Jan 26 16:05:23 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 16:05:23 crc restorecon[4578]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:23 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
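
The ..data and ..2025_02_24_06_17_29.* names in the config, audit, and image-import-ca volumes are the kubelet atomic-writer layout for projected ConfigMaps: each published version is written into a fresh timestamped directory, ..data is a symlink to the current version, and the user-visible keys (config.yaml, policy.yaml) are symlinks routed through ..data, so an update becomes visible in a single symlink swap. A short sketch that walks the indirection on a live node, assuming Python 3 and the pod UID taken from the entries above:

    import os

    vol = ("/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69"
           "/volumes/kubernetes.io~configmap/config")

    # Expected on a node like this one:
    #   config.yaml -> ..data/config.yaml
    #   ..data      -> ..2025_02_24_06_17_29.848549803
    print(os.readlink(os.path.join(vol, "config.yaml")))
    print(os.readlink(os.path.join(vol, "..data")))
    print(os.path.realpath(os.path.join(vol, "config.yaml")))

Because consumers only ever dereference ..data, a ConfigMap update appears all at once rather than key by key.

Jan 26 16:05:24 crc restorecon[4578]: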
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
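
The eight-hex-digit names in directory-hash/ (ce5e74ef.0, e35234b1.0, and so on) are OpenSSL subject-hash links that update-ca-trust lays down next to the PEM files when it extracts the trust bundle into this emptyDir; the .0 suffix is a collision counter. A sketch of how such a name is derived, assuming the openssl CLI is installed and a local copy of one of the CAs listed above (the file name here is hypothetical):

    import subprocess

    def subject_hash(pem_path: str) -> str:
        # `openssl x509 -subject_hash` prints the 8-hex-digit hash of the
        # certificate's subject name, e.g. "ce5e74ef".
        out = subprocess.run(
            ["openssl", "x509", "-noout", "-subject_hash", "-in", pem_path],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    print(subject_hash("Amazon_Root_CA_1.pem") + ".0")  # hypothetical local copy

TLS libraries pointed at a CApath use these links to locate a candidate issuer by hashed name instead of scanning every file in the directory.

Jan 26 16:05:24 crc restorecon[4578]: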
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:05:24 crc 
restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc 
restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:05:24 crc restorecon[4578]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 16:05:24 crc kubenswrapper[4680]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:05:24 crc kubenswrapper[4680]: I0126 16:05:24.995135 4680 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998783 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998799 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998805 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998809 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998814 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998818 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998822 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998826 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998830 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998835 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998839 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998844 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998848 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998852 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998855 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998859 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998862 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998867 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998870 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998874 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998878 4680 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAzure Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998883 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998887 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998892 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998896 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998900 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998904 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998908 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998913 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:05:24 crc kubenswrapper[4680]: W0126 16:05:24.998917 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998940 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998944 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998948 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998952 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998956 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998960 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998964 4680 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998968 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998973 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998977 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998981 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998985 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998989 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998992 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.998996 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999000 4680 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999004 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999008 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999012 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999015 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999019 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999022 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999026 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999029 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999033 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999036 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999041 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999044 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999048 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999052 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999055 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999060 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
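[editor's note] The "unrecognized feature gate" flood above (and repeated in the further passes below) is expected noise: these are OpenShift-level gate names handed down to the kubelet, whose embedded Kubernetes gate registry only knows the upstream gates, so it warns and ignores the rest. To collapse the flood into one line per gate with a repeat count, a small sketch over the same hypothetical journal dump:

    # gate_noise.py - count distinct "unrecognized feature gate" warnings
    # in a saved journal dump (hypothetical file: kubelet-journal.txt).
    import re
    from collections import Counter

    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    def gate_counts(path="kubelet-journal.txt"):
        with open(path, encoding="utf-8") as f:
            return Counter(m.group(1) for line in f for m in GATE.finditer(line))

    if __name__ == "__main__":
        counts = gate_counts()
        print(f"{len(counts)} distinct gates, {sum(counts.values())} warnings")
        for name, n in counts.most_common():
            print(f"{n:3d}  {name}")

Each startup pass emits the same set, so the counts show how many times the gate list was re-evaluated rather than any growing problem.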
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999085 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999090 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999095 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999099 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999103 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999106 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999110 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999114 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:24.999118 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999332 4680 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999343 4680 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999352 4680 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999358 4680 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999364 4680 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999368 4680 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999373 4680 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999379 4680 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999384 4680 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999388 4680 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999392 4680 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999397 4680 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999401 4680 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999406 4680 flags.go:64] FLAG: --cgroup-root="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999410 4680 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999414 4680 flags.go:64] FLAG: --client-ca-file="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999418 4680 flags.go:64] FLAG: --cloud-config="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999421 4680 flags.go:64] FLAG: --cloud-provider="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999425 4680 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 16:05:25 
crc kubenswrapper[4680]: I0126 16:05:24.999430 4680 flags.go:64] FLAG: --cluster-domain="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999435 4680 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999440 4680 flags.go:64] FLAG: --config-dir="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999444 4680 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999449 4680 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999455 4680 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999460 4680 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999464 4680 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999469 4680 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999473 4680 flags.go:64] FLAG: --contention-profiling="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999477 4680 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999481 4680 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999486 4680 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999490 4680 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999495 4680 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999499 4680 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999503 4680 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999507 4680 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999511 4680 flags.go:64] FLAG: --enable-server="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999514 4680 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999520 4680 flags.go:64] FLAG: --event-burst="100" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999525 4680 flags.go:64] FLAG: --event-qps="50" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999529 4680 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999533 4680 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999538 4680 flags.go:64] FLAG: --eviction-hard="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999542 4680 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999546 4680 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999550 4680 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999555 4680 flags.go:64] FLAG: --eviction-soft="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999559 4680 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999563 4680 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999567 4680 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999571 4680 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999575 4680 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999579 4680 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999583 4680 flags.go:64] FLAG: --feature-gates="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999588 4680 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999592 4680 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999596 4680 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999600 4680 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999604 4680 flags.go:64] FLAG: --healthz-port="10248" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999608 4680 flags.go:64] FLAG: --help="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999612 4680 flags.go:64] FLAG: --hostname-override="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999616 4680 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999622 4680 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999626 4680 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999630 4680 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999635 4680 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999640 4680 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999643 4680 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999648 4680 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999652 4680 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999656 4680 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999661 4680 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999665 4680 flags.go:64] FLAG: --kube-reserved="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999669 4680 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999673 4680 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999677 4680 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999681 4680 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999685 4680 flags.go:64] FLAG: --lock-file="" 
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999690 4680 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999694 4680 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999699 4680 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999709 4680 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999713 4680 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999717 4680 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999722 4680 flags.go:64] FLAG: --logging-format="text" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999726 4680 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999731 4680 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999735 4680 flags.go:64] FLAG: --manifest-url="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999740 4680 flags.go:64] FLAG: --manifest-url-header="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999746 4680 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999750 4680 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999756 4680 flags.go:64] FLAG: --max-pods="110" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999760 4680 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999764 4680 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999769 4680 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999773 4680 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999777 4680 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999782 4680 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999787 4680 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999796 4680 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999800 4680 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999804 4680 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999809 4680 flags.go:64] FLAG: --pod-cidr="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999812 4680 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999819 4680 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999823 4680 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999827 4680 
flags.go:64] FLAG: --pods-per-core="0" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999831 4680 flags.go:64] FLAG: --port="10250" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999835 4680 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999839 4680 flags.go:64] FLAG: --provider-id="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999843 4680 flags.go:64] FLAG: --qos-reserved="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999847 4680 flags.go:64] FLAG: --read-only-port="10255" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999851 4680 flags.go:64] FLAG: --register-node="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999856 4680 flags.go:64] FLAG: --register-schedulable="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999860 4680 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999867 4680 flags.go:64] FLAG: --registry-burst="10" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999871 4680 flags.go:64] FLAG: --registry-qps="5" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999875 4680 flags.go:64] FLAG: --reserved-cpus="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999879 4680 flags.go:64] FLAG: --reserved-memory="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999884 4680 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999888 4680 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999892 4680 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999896 4680 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999900 4680 flags.go:64] FLAG: --runonce="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999904 4680 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999908 4680 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999913 4680 flags.go:64] FLAG: --seccomp-default="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999917 4680 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999921 4680 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999925 4680 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999930 4680 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999934 4680 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999938 4680 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999942 4680 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999946 4680 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999950 4680 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999954 4680 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 
26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999958 4680 flags.go:64] FLAG: --system-cgroups="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999963 4680 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999970 4680 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999975 4680 flags.go:64] FLAG: --tls-cert-file="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999979 4680 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999983 4680 flags.go:64] FLAG: --tls-min-version="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999989 4680 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999993 4680 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:24.999997 4680 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000001 4680 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000006 4680 flags.go:64] FLAG: --v="2" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000011 4680 flags.go:64] FLAG: --version="false" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000016 4680 flags.go:64] FLAG: --vmodule="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000022 4680 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000026 4680 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000155 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000160 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000165 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000169 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000173 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000177 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000182 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000186 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000191 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
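[editor's note] The flags.go:64 dump above records every flag's value at startup: defaults plus whatever was passed on the command line, before the config file at /etc/kubernetes/kubelet.conf is applied (so e.g. --authorization-mode="AlwaysAllow" here is only the flag default, not necessarily the effective setting). Parsing the dump back into a dict makes it easy to diff flag values between boots. A sketch, assuming the quoted FLAG: --name="value" shape shown above:

    # flag_dump.py - rebuild the kubelet's startup flag dump into a dict.
    import re

    # matches e.g.: flags.go:64] FLAG: --node-ip="192.168.126.11"
    FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    def parse_flags(path="kubelet-journal.txt"):
        flags = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                for name, value in FLAG.findall(line):
                    flags[name] = value
        return flags

    if __name__ == "__main__":
        flags = parse_flags()
        print(len(flags), "flags")
        print("node-ip =", flags.get("--node-ip"))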
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000195 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000199 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000203 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000207 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000210 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000214 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000218 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000222 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000225 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000229 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000234 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000239 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000244 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000249 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000254 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000258 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000263 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000268 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000273 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000278 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000282 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000286 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000291 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000296 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000301 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000311 
4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000315 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000319 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000323 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000330 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000336 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000341 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000345 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000350 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000354 4680 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000359 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000362 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000366 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000370 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000374 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000379 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000383 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000388 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000393 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000399 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000405 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000409 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000413 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000417 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000421 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000425 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000429 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000433 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000437 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000441 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000447 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000451 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000455 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000459 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000463 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000467 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.000471 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.000478 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.008387 4680 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.008406 4680 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008484 4680 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008489 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008494 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008498 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008502 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008505 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008509 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008512 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008517 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008521 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008526 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008531 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008535 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008539 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008543 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008547 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008550 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008554 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008558 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008562 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008565 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008569 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008572 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008576 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008580 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:05:25 crc 
kubenswrapper[4680]: W0126 16:05:25.008585 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008588 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008592 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008596 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008602 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008607 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008613 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008619 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008625 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008629 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008634 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008639 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008644 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008648 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008653 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008658 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008662 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008667 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008672 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008676 4680 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008681 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008686 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008690 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008693 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008697 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008701 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 
16:05:25.008704 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008708 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008711 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008715 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008719 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008722 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008725 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008729 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008734 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008738 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008744 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008749 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008754 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008758 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008762 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008766 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008771 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008775 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008779 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008783 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.008789 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008915 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008921 4680 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008926 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008929 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008933 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008936 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008940 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008944 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008948 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008952 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008955 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008959 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008963 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008967 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008970 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008974 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008977 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008981 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008985 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008989 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008992 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.008996 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009000 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009005 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009010 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009014 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009018 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009022 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009026 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009030 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009033 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009037 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009041 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009045 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009048 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009052 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009055 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009059 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009062 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009087 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009091 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009094 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009098 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009102 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009105 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009109 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009113 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009118 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009121 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009125 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009129 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009133 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009136 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009140 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009144 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009148 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009151 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009155 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009158 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009162 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009166 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009170 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009173 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009178 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009183 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009188 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009192 4680 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009195 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009199 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009203 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.009207 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.009212 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.009461 4680 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.011778 4680 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.011845 4680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
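[editor's note] Each gate pass ends with a consolidated feature_gate.go:386 "feature gates: {map[...]}" line, a Go map dump of the gates that actually applied (identical in all three passes above). To turn that summary back into structured data, a sketch over the exact text shape in the log; the sample string below is abbreviated from the line above:

    # gates_map.py - parse the "feature gates: {map[...]}" summary line.
    import re

    PAIR = re.compile(r"(\w+):(true|false)")

    def parse_gates(line):
        # e.g. 'feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true ...]}'
        body = line.split("feature gates:", 1)[1]
        return {name: value == "true" for name, value in PAIR.findall(body)}

    if __name__ == "__main__":
        sample = ("feature gates: {map[CloudDualStackNodeIPs:true "
                  "DisableKubeletCloudCredentialProviders:true KMSv1:true "
                  "NodeSwap:false ValidatingAdmissionPolicy:true]}")
        gates = parse_gates(sample)
        print(sorted(name for name, on in gates.items() if on))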
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.012296 4680 server.go:997] "Starting client certificate rotation"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.012317 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.012804 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 18:58:57.161125044 +0000 UTC
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.012919 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.018177 4680 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.019518 4680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.020473 4680 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.033539 4680 log.go:25] "Validated CRI v1 runtime API"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.060735 4680 log.go:25] "Validated CRI v1 image API"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.062513 4680 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.066323 4680 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-16-00-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.066395 4680 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.088378 4680 manager.go:217] Machine: {Timestamp:2026-01-26 16:05:25.086529301 +0000 UTC m=+0.247801660 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6bbe44ff-394c-4d30-89b4-d488d80b2762 BootID:c9179394-fa64-4ce2-b2e0-fe9933369765 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22
Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:82:40:7f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:82:40:7f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:d1:08 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2b:93:48 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:bc:02:51 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f8:5c:78 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:51:2e:6f:34:5f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:d4:eb:a2:5d:0d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.088781 4680 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
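[Annotation] The certificate_manager.go:356 lines above show the client certificate expiring 2026-02-24 but scheduled for rotation on 2025-12-28: the manager picks the rotation deadline at a jittered point late in the certificate's validity window (roughly the 70-90% band of the NotBefore-to-NotAfter duration, to the best of my recollection; treat the exact fraction, and the issuance time below, as assumptions rather than the certificate_manager.go code). A sketch in Go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline sketches the jittered deadline: a point in the
    // assumed 70%-90% band of the certificate's lifetime.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jitter := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
        return notBefore.Add(jitter)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z") // expiry from the log
        notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issuance time is an assumption
        fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
    }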
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.089374 4680 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.090416 4680 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.090808 4680 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.090864 4680 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.091272 4680 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.091292 4680 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.091655 4680 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.091716 4680 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.092182 4680 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.092318 4680 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.093188 4680 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.093227 4680 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
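[Annotation] The HardEvictionThresholds embedded in the nodeConfig above mix quantity signals (memory.available < 100Mi) and percentage signals (nodefs.available < 10%, imagefs.available < 15%, the inode signals < 5%). A hedged Go sketch of how one such threshold evaluates against observed values; the `threshold` type is illustrative, not kubelet's eviction code:

    package main

    import "fmt"

    // threshold models one HardEvictionThresholds entry: either a
    // percentage of capacity or an absolute quantity in bytes.
    type threshold struct {
        signal     string
        percentage float64 // used when > 0
        quantity   int64   // bytes, used when percentage == 0
    }

    // crossed reports whether available has fallen below the limit.
    func (t threshold) crossed(available, capacity int64) bool {
        limit := t.quantity
        if t.percentage > 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memAvail := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percentage: 0.1}
        fmt.Println(memAvail.signal, "crossed:", memAvail.crossed(90<<20, 24<<30)) // true
        fmt.Println(nodefs.signal, "crossed:", nodefs.crossed(10<<30, 80<<30))     // false
    }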
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.093268 4680 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.093288 4680 kubelet.go:324] "Adding apiserver pod source"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.093328 4680 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.094741 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.094808 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.094882 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.095061 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.095764 4680 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.096190 4680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
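[Annotation] The reflector.go warnings above are client-go informers failing their initial LIST because nothing is answering on api-int.crc.testing:6443 yet; informers retry with backoff, so these clear once the API server comes up. The same LIST the Node informer issues, expressed directly with client-go (the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the node's kubeconfig (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same field selector the log shows: metadata.name=crc.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=crc",
        })
        if err != nil {
            fmt.Println("list nodes:", err) // "connection refused" while the apiserver is down
            return
        }
        fmt.Println("got", len(nodes.Items), "node(s)")
    }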
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.096964 4680 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097496 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097518 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097525 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097532 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097544 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097552 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097558 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097569 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097579 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097586 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097596 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097602 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.097803 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.098154 4680 server.go:1280] "Started kubelet"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.098586 4680 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.098585 4680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.100198 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:25 crc systemd[1]: Started Kubernetes Kubelet.
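[Annotation] The plugins.go:603 lines enumerate the in-tree volume plugins registered under their canonical names; later mounts are resolved against this set by name, with anything driver-backed landing on kubernetes.io/csi. A stdlib-only Go sketch of such a name-to-plugin registry (illustrative; not kubelet's VolumePluginMgr):

    package main

    import "fmt"

    // Plugin is a stand-in for kubelet's volume plugin interface.
    type Plugin interface{ Name() string }

    type emptyDir struct{}

    func (emptyDir) Name() string { return "kubernetes.io/empty-dir" }

    type csi struct{}

    func (csi) Name() string { return "kubernetes.io/csi" }

    func main() {
        // Register each plugin under its canonical name, as the log shows.
        registry := map[string]Plugin{}
        for _, p := range []Plugin{emptyDir{}, csi{}} {
            registry[p.Name()] = p
            fmt.Println("Loaded volume plugin", p.Name())
        }
        // Resolution at mount time is a plain map lookup by name.
        if p, ok := registry["kubernetes.io/csi"]; ok {
            fmt.Println("resolved:", p.Name())
        }
    }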
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.100327 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e5382db96bfbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:05:25.098127295 +0000 UTC m=+0.259399564,LastTimestamp:2026-01-26 16:05:25.098127295 +0000 UTC m=+0.259399564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.101397 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.101430 4680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.101454 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:35:27.059092294 +0000 UTC
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.102618 4680 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.102654 4680 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.107479 4680 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.107531 4680 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.108769 4680 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.110278 4680 factory.go:55] Registering systemd factory
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.110314 4680 factory.go:221] Registration of the systemd container factory successfully
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.110523 4680 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.112403 4680 factory.go:153] Registering CRI-O factory
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.112442 4680 factory.go:221] Registration of the crio container factory successfully
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.112546 4680 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.112579 4680 factory.go:103] Registering Raw factory
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.112607 4680 manager.go:1196] Started watching for new ooms in manager
Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.113219 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.113823 4680 manager.go:319] Starting recovery of all containers Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.114060 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.114226 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125057 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125163 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125190 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125211 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125232 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125254 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125275 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125296 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125322 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125344 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125367 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125388 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125411 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125437 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125459 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125481 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125504 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125542 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125569 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125597 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125624 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125645 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125666 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125688 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125708 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125728 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125754 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125776 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125830 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125852 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125872 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125893 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125912 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125936 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125956 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125976 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.125998 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126018 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126039 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126061 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126111 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126131 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126151 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126173 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126195 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126217 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126239 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126261 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126280 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126303 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126326 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126346 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126374 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126399 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126421 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126443 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126465 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126484 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126504 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126524 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126543 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126562 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126582 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126601 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126621 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126642 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126664 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126685 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126707 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126730 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126754 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126776 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126802 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126823 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126844 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126864 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126919 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126941 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126960 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.126979 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127000 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127825 4680 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127878 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127906 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127936 4680 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127962 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.127990 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128018 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128047 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128109 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128137 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128161 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128180 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128199 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128216 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128236 4680 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128256 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128281 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128299 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128317 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128335 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128355 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128373 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128391 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128412 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128440 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128461 4680 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128482 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128503 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128523 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128555 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128577 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128599 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128622 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128641 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128660 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128678 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128696 4680 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128714 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128733 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128754 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128779 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128800 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128819 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128836 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128853 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128872 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128890 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128909 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128931 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128948 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128965 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.128982 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129001 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129018 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129037 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129054 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129096 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129121 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129147 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129172 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129200 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129225 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129251 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129280 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129303 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129321 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129339 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129357 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129375 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129395 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129415 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.129433 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132014 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132564 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132586 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132605 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132628 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132650 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132676 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132703 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132744 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132772 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132799 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132827 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132854 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132878 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132897 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132915 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132935 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132954 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132973 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.132993 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133013 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133030 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133049 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133100 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133123 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133143 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133161 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133179 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133198 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133216 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133236 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133255 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133273 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133292 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133312 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133331 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133349 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133367 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133385 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133404 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133424 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133441 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133461 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133478 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133497 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133516 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133535 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133554 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133572 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133591 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133610 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133629 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133648 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133666 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133684 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133701 4680 reconstruct.go:97] "Volume reconstruction finished" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.133715 4680 reconciler.go:26] "Reconciler: start to sync state" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.157757 4680 manager.go:324] Recovery completed Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.166450 4680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.168313 4680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.168366 4680 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.168403 4680 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.168454 4680 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.169153 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.169214 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.170914 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.173125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.173161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.173171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.174162 4680 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.174178 4680 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 
26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.174197 4680 state_mem.go:36] "Initialized new in-memory state store" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.211347 4680 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.266284 4680 policy_none.go:49] "None policy: Start" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.267837 4680 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.267942 4680 state_mem.go:35] "Initializing new in-memory state store" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.268883 4680 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.312473 4680 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.314115 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.322683 4680 manager.go:334] "Starting Device Plugin manager" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.323762 4680 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.324440 4680 server.go:79] "Starting device plugin registration server" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.324913 4680 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.324929 4680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.325232 4680 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.325306 4680 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.325313 4680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.341292 4680 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.426859 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.428441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.428488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.428502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.428532 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:05:25 crc 
kubenswrapper[4680]: E0126 16:05:25.429110 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.470053 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.470287 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.471906 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.471979 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.471996 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.472251 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.472792 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.472893 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473312 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473494 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473640 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.473697 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.474304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.474362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.474379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475601 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475726 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475853 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.475889 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476780 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476777 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.476976 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477064 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477798 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477860 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477905 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.477923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.478414 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.478484 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479776 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.479732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539799 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539872 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539904 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539928 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539950 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.539988 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540039 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540091 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540116 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540186 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540322 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540360 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.540385 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.629830 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.631982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.632054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.632104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.632150 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.633109 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641454 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641538 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641583 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641622 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641664 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641704 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641765 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641816 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641838 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641871 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc 
kubenswrapper[4680]: I0126 16:05:25.641903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641914 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641934 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.641998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642039 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642102 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642042 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642142 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642196 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642007 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642269 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642304 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642359 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642398 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642505 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.642536 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: E0126 16:05:25.714856 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.816537 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.844386 4680 util.go:30] "No sandbox for pod can be found. 
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.852165 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d5a8080cb657dd522225505139d431e7f86a593b40d76fdd12509456beef9f9f WatchSource:0}: Error finding container d5a8080cb657dd522225505139d431e7f86a593b40d76fdd12509456beef9f9f: Status 404 returned error can't find the container with id d5a8080cb657dd522225505139d431e7f86a593b40d76fdd12509456beef9f9f
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.860616 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.869555 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-83ec64bde4dc3c2ecf6e2a2421565cdbcf7a2e8944d067ee45a30bb2bac11654 WatchSource:0}: Error finding container 83ec64bde4dc3c2ecf6e2a2421565cdbcf7a2e8944d067ee45a30bb2bac11654: Status 404 returned error can't find the container with id 83ec64bde4dc3c2ecf6e2a2421565cdbcf7a2e8944d067ee45a30bb2bac11654
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.877770 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3a48990ad40d22490db8dcf075b1aeaef38184d9c80a411b7b5834abca837076 WatchSource:0}: Error finding container 3a48990ad40d22490db8dcf075b1aeaef38184d9c80a411b7b5834abca837076: Status 404 returned error can't find the container with id 3a48990ad40d22490db8dcf075b1aeaef38184d9c80a411b7b5834abca837076
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.885993 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 16:05:25 crc kubenswrapper[4680]: I0126 16:05:25.897167 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.901642 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d3153fc3c95758d7e8fa0470b76e6e0796cbcd4723b8076313834636708332f3 WatchSource:0}: Error finding container d3153fc3c95758d7e8fa0470b76e6e0796cbcd4723b8076313834636708332f3: Status 404 returned error can't find the container with id d3153fc3c95758d7e8fa0470b76e6e0796cbcd4723b8076313834636708332f3
Jan 26 16:05:25 crc kubenswrapper[4680]: W0126 16:05:25.915470 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c7ddb6f25b94cc3756fa27786747ca875af200410b1476d51b7bec3fdb8c0c8c WatchSource:0}: Error finding container c7ddb6f25b94cc3756fa27786747ca875af200410b1476d51b7bec3fdb8c0c8c: Status 404 returned error can't find the container with id c7ddb6f25b94cc3756fa27786747ca875af200410b1476d51b7bec3fdb8c0c8c
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.033660 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.036763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.036813 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.036823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.036852 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.037536 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc"
Jan 26 16:05:26 crc kubenswrapper[4680]: W0126 16:05:26.050435 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.050531 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.101035 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.102006 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:39:16.847282958 +0000 UTC
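Alongside the lease, the kubelet keeps trying to register its Node object, and each attempt fails the same way while the API server is down. Reduced to its essence with client-go (client setup assumed, not taken from kubelet source):

```go
// Create the Node object the way kubelet_node_status.go's register step
// does conceptually: a bare POST of a Node named "crc".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "crc"}}
	if _, err := client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		// While the API server is still coming up this fails exactly like the
		// log: first "connection refused", later "TLS handshake timeout".
		fmt.Printf("unable to register node: %v\n", err)
		return
	}
	fmt.Println("node registered")
}
```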
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.174289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83ec64bde4dc3c2ecf6e2a2421565cdbcf7a2e8944d067ee45a30bb2bac11654"}
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.175873 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5a8080cb657dd522225505139d431e7f86a593b40d76fdd12509456beef9f9f"}
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.177574 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c7ddb6f25b94cc3756fa27786747ca875af200410b1476d51b7bec3fdb8c0c8c"}
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.179240 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d3153fc3c95758d7e8fa0470b76e6e0796cbcd4723b8076313834636708332f3"}
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.181394 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3a48990ad40d22490db8dcf075b1aeaef38184d9c80a411b7b5834abca837076"}
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.516160 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s"
Jan 26 16:05:26 crc kubenswrapper[4680]: W0126 16:05:26.517429 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.517491 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:26 crc kubenswrapper[4680]: W0126 16:05:26.618692 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.618758 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:26 crc kubenswrapper[4680]: W0126 16:05:26.629854 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.629913 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.838572 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.843014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.843080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.843091 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:26 crc kubenswrapper[4680]: I0126 16:05:26.843122 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 16:05:26 crc kubenswrapper[4680]: E0126 16:05:26.843679 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.037197 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 16:05:27 crc kubenswrapper[4680]: E0126 16:05:27.038357 4680 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.101680 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.102790 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:41:57.15600105 +0000 UTC
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.185948 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f" exitCode=0
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.186160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.186237 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192034 4680 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1" exitCode=0
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192541 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192579 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.192959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.193506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.193541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.193552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.196360 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.196388 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.196398 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.197923 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c" exitCode=0
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.197977 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.198046 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.198872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.199002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.199151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.200512 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1" exitCode=0
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.200647 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1"}
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.200689 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.201679 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.202504 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.202543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.202557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.203092 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.203116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:27 crc kubenswrapper[4680]: I0126 16:05:27.203126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:27 crc kubenswrapper[4680]: E0126 16:05:27.405224 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e5382db96bfbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:05:25.098127295 +0000 UTC m=+0.259399564,LastTimestamp:2026-01-26 16:05:25.098127295 +0000 UTC m=+0.259399564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.101862 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
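The "Unable to write event (may retry after sleeping)" entry shows why the 16:05:25 "Starting kubelet." event is only being POSTed now: client-go's event broadcaster queues events and retries the write in the background. A minimal sketch of that machinery (the wiring below is assumed boilerplate, not kubelet's):

```go
// Record an event through a client-go broadcaster; the sink delivers it
// asynchronously and retries on failure, as the log line describes.
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/record"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	recorder := broadcaster.NewRecorder(scheme.Scheme,
		corev1.EventSource{Component: "kubelet", Host: "crc"}) // source as in the log

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "crc", UID: "crc"}}
	recorder.Event(node, corev1.EventTypeNormal, "Starting", "Starting kubelet.")

	time.Sleep(2 * time.Second) // give the async sink a moment before exiting
	broadcaster.Shutdown()
}
```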
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.103776 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:12:03.278586521 +0000 UTC
Jan 26 16:05:28 crc kubenswrapper[4680]: E0126 16:05:28.116774 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s"
Jan 26 16:05:28 crc kubenswrapper[4680]: W0126 16:05:28.132766 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Jan 26 16:05:28 crc kubenswrapper[4680]: E0126 16:05:28.132903 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.215168 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d" exitCode=0
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.215240 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.215359 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.216333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.216364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.216375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.220164 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.220333 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.225039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.225195 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.227464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.227518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.227538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.229402 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.229454 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.231849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c62222bdb0996eeb7ef310cd37b4fb75c631e560a6820c6d1d9ec9d041020c66"}
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.231939 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.233166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.233218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.233237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.444130 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.448960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.449013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.449025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.449052 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 16:05:28 crc kubenswrapper[4680]: I0126 16:05:28.927100 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.104175 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:36:54.139724684 +0000 UTC
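The kubelet-serving rotation deadline printed above changes on every line because the certificate manager re-derives it from the certificate's validity window with fresh random jitter each time. A sketch of that computation; the 70-90% fraction and the one-year validity are assumptions for illustration, not the exact client-go constants:

```go
// Recompute a jittered rotation deadline from a certificate's validity
// window, which is why repeated log lines print different deadlines for
// the same expiration time.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Pick a random point between 70% and 90% of the lifetime (assumed jitter).
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	// Expiration from the log, rewritten in RFC 3339 form for time.Parse.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed 1-year validity
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}
```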
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.234965 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062" exitCode=0
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.235232 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062"}
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.235342 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.236220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.236245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.236254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.239991 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2"}
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.240031 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.241205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.241228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.241240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.246673 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.247115 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.247431 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da"}
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.247461 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424"}
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.247475 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f"}
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.247555 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248310 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248322 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.248768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.252724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.252748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:29 crc kubenswrapper[4680]: I0126 16:05:29.252759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.105030 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:22:36.933368984 +0000 UTC
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e"}
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189"}
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253906 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253957 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253998 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253993 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.253913 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814"}
Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.254136 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
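The "Generic (PLEG)" and "SyncLoop (PLEG)" lines come from the pod lifecycle event generator: it relists container states and turns the diffs into ContainerStarted/ContainerDied events, which is how the init containers' exitCode=0 above triggers the next containers in each static pod. A toy relist, not the CRI-backed real thing:

```go
// Diff two snapshots of container states and emit lifecycle events, the
// core idea behind PLEG's relist.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	ContainerID string
	Type        string // "ContainerStarted" or "ContainerDied"
}

// relist compares the previous and current container states and emits events.
func relist(old, cur map[string]state) []event {
	var events []event
	for id, s := range cur {
		switch {
		case old[id] != running && s == running:
			events = append(events, event{id, "ContainerStarted"})
		case old[id] == running && s == exited:
			events = append(events, event{id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	// IDs shortened from the log: an etcd init container exits while a
	// long-running etcd container appears.
	old := map[string]state{"6e0b0e10effc": running}
	cur := map[string]state{"6e0b0e10effc": exited, "96fcb5569a12": running}
	for _, e := range relist(old, cur) {
		fmt.Printf("SyncLoop (PLEG): %s %s\n", e.Type, e.ContainerID)
	}
}
```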
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.254154 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2"} Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255053 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255063 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.255203 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.559373 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:30 crc kubenswrapper[4680]: I0126 16:05:30.840607 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.100778 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.106467 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:05:33.757392215 +0000 UTC Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.259610 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260092 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260222 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b"} Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260424 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 
16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.260730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261924 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:31 crc kubenswrapper[4680]: I0126 16:05:31.261939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.106901 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:00:59.220190844 +0000 UTC Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.262100 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.262111 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.262990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.263004 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.263028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.263037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.263009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:32 crc kubenswrapper[4680]: I0126 16:05:32.263105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.107731 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:34:08.593111881 +0000 UTC Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.559634 4680 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.559710 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.577790 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.577925 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.579045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.579084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.579095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:33 crc kubenswrapper[4680]: I0126 16:05:33.683979 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.106194 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.106444 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.107839 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:46:46.522843643 +0000 UTC Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.108169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.108198 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.108207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.268231 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.269573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.269631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.269655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:34 crc 
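The probe failures above are plain HTTPS GETs with a per-probe timeout, executed by the kubelet's prober. A sketch of such a probe against the cluster-policy-controller healthz endpoint from the log; the short timeout and the skipped TLS verification are assumptions to keep it self-contained. A non-200 response, like the kube-apiserver /livez 403 further down, is reported as a statuscode failure:

```go
// Issue one HTTPS probe and classify the outcome the way the prober log
// lines do: transport error, bad status code, or success.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // exceeding this yields "context deadline exceeded"
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption for the sketch
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		fmt.Println("Probe failed:", err) // e.g. connection refused or timeout
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		fmt.Printf("HTTP probe failed with statuscode: %d, start-of-body=%s\n", resp.StatusCode, body)
		return
	}
	fmt.Println("probe ok:", string(body))
}
```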
Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.500470 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.500712 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.503205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.503278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:34 crc kubenswrapper[4680]: I0126 16:05:34.503303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.108213 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:59:35.801226037 +0000 UTC
Jan 26 16:05:35 crc kubenswrapper[4680]: E0126 16:05:35.342276 4680 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.877921 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.878264 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.879838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.879902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:35 crc kubenswrapper[4680]: I0126 16:05:35.879926 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:36 crc kubenswrapper[4680]: I0126 16:05:36.108891 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:32:49.619822812 +0000 UTC
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.109094 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:24:45.392061401 +0000 UTC
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.916147 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.916376 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.917717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.917762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.917774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:37 crc kubenswrapper[4680]: I0126 16:05:37.920986 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.109666 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:28:02.913244072 +0000 UTC
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.278554 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.280057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.280142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.280160 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.286722 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:05:38 crc kubenswrapper[4680]: E0126 16:05:38.451995 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 26 16:05:38 crc kubenswrapper[4680]: W0126 16:05:38.477366 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.477514 4680 trace.go:236] Trace[783450960]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:05:28.475) (total time: 10001ms):
Jan 26 16:05:38 crc kubenswrapper[4680]: Trace[783450960]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:05:38.477)
Jan 26 16:05:38 crc kubenswrapper[4680]: Trace[783450960]: [10.001975477s] [10.001975477s] END
Jan 26 16:05:38 crc kubenswrapper[4680]: E0126 16:05:38.477547 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.514620 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 16:05:38 crc kubenswrapper[4680]: I0126 16:05:38.514733 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 16:05:39 crc kubenswrapper[4680]: W0126 16:05:39.014929 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.015027 4680 trace.go:236] Trace[442374031]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:05:29.013) (total time: 10001ms):
Jan 26 16:05:39 crc kubenswrapper[4680]: Trace[442374031]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:05:39.014)
Jan 26 16:05:39 crc kubenswrapper[4680]: Trace[442374031]: [10.001560857s] [10.001560857s] END
Jan 26 16:05:39 crc kubenswrapper[4680]: E0126 16:05:39.015054 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 16:05:39 crc kubenswrapper[4680]: W0126 16:05:39.021113 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.021194 4680 trace.go:236] Trace[467006388]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:05:29.019) (total time: 10001ms):
Jan 26 16:05:39 crc kubenswrapper[4680]: Trace[467006388]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:05:39.021)
Jan 26 16:05:39 crc kubenswrapper[4680]: Trace[467006388]: [10.001493395s] [10.001493395s] END
Jan 26 16:05:39 crc kubenswrapper[4680]: E0126 16:05:39.021220 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.102051 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.110417 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:19:19.785060219 +0000 UTC
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.281314 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.282766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.282815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.282828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
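The reflector errors and the 10-second ListAndWatch traces above come from client-go informers: each informer lists a type, then watches it, and when the list call fails (connection refused earlier, TLS handshake timeout here) the reflector logs the error and retries with backoff instead of crashing. A minimal informer setup showing where that ListAndWatch starts (kubeconfig path assumed):

```go
// Start a shared informer factory; factory.Start kicks off one reflector
// ListAndWatch per informer, the call failing in the log above.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("service added:", obj.(*corev1.Service).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer's reflector begins ListAndWatch here
	factory.WaitForCacheSync(stop)
	fmt.Println("caches synced")
}
```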
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.736749 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.736837 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.742996 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 16:05:39 crc kubenswrapper[4680]: I0126 16:05:39.743076 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 16:05:40 crc kubenswrapper[4680]: I0126 16:05:40.111138 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:34:19.994105889 +0000 UTC Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.112176 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:32:57.521602544 +0000 UTC Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.652634 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.654254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.654310 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.654326 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:41 crc kubenswrapper[4680]: I0126 16:05:41.654355 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:05:41 crc kubenswrapper[4680]: E0126 16:05:41.658839 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 16:05:42 crc kubenswrapper[4680]: I0126 16:05:42.113203 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:50:16.871061679 +0000 UTC Jan 26 16:05:42 crc kubenswrapper[4680]: I0126 16:05:42.363423 4680 reflector.go:368] Caches 
populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 16:05:42 crc kubenswrapper[4680]: I0126 16:05:42.471854 4680 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.113827 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:23:49.379160983 +0000 UTC Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.303890 4680 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.560157 4680 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.560363 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.690322 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:43 crc kubenswrapper[4680]: I0126 16:05:43.693651 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.105500 4680 apiserver.go:52] "Watching apiserver" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.108760 4680 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109127 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109483 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109504 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109534 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.109553 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109717 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.109803 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.109814 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.110375 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.110422 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.112505 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.112522 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.112529 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.112534 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.113284 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.114064 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:37:16.012410357 +0000 UTC Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.114092 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.114119 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.114122 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.116341 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.155685 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.174938 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.189899 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.198847 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.208571 4680 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.208678 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.218367 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.228716 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.239566 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.302751 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.521920 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.533158 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.533878 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.537533 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.543814 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.553957 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.561564 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.580221 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.591902 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.603232 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.618212 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.650807 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d0997
8c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.683336 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.702230 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.714495 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.725563 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.734429 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.736547 4680 trace.go:236] Trace[1344094602]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:05:33.739) (total time: 10996ms): Jan 26 16:05:44 crc kubenswrapper[4680]: Trace[1344094602]: ---"Objects listed" error: 10996ms (16:05:44.736) Jan 26 16:05:44 crc kubenswrapper[4680]: Trace[1344094602]: [10.996755138s] [10.996755138s] END Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.736572 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.737140 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.739140 4680 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.748981 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.750328 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.751265 4680 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.763324 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.774452 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.789944 4680 csr.go:261] certificate signing request csr-g4ljd is approved, waiting to be issued Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.789942 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.796410 4680 csr.go:257] certificate signing request csr-g4ljd is issued Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.803746 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.818168 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.834649 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dac
ef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839512 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839565 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839593 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839626 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839651 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839674 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839697 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839721 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839754 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839775 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839796 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839843 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" 
(UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839865 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839889 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839915 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839939 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839965 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839988 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.839984 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840011 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840036 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840102 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840088 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840278 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840124 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840339 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840424 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840436 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840446 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840448 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840470 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840493 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840495 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840511 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840561 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840575 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840598 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840614 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840633 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840619 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840651 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840668 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840685 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840705 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840721 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840738 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840757 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840773 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840789 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840804 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840822 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840841 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840856 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840872 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840889 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840904 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840924 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840939 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840955 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840973 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840989 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841007 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841030 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841058 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841094 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841109 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841125 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841142 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841158 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841189 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841205 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841219 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841235 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841251 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841267 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841285 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841313 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841332 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840817 4680 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840871 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.840872 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841011 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841040 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841055 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841253 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844778 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841274 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.841352 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:05:45.341332824 +0000 UTC m=+20.502605093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841428 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841482 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841493 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841677 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841695 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841720 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841867 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841887 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.841959 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842114 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842140 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842203 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842300 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842368 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842405 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842416 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842540 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842566 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842674 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842719 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842763 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842856 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842889 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842911 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.842927 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843026 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843053 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843126 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843252 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843412 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.843761 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844282 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844379 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844585 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844604 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844664 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.844900 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845009 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845035 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845049 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845045 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845129 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845158 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845207 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845216 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845229 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845252 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845273 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845292 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845314 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845311 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845338 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845341 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845363 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845387 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845409 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845430 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845451 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845473 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845499 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845522 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845544 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845549 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod 
"87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845566 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845595 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845595 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845620 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845644 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845668 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845692 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845715 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845735 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845760 4680 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845781 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845829 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845829 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845871 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845895 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845916 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845937 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845959 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.845979 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846000 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846047 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846088 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846109 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846130 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846150 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846197 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846234 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846257 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846281 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846304 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846324 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846344 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846364 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846385 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846406 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846425 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846444 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846463 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846483 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846507 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846542 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846557 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846581 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846603 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846620 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846640 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846664 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846689 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846735 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846756 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846782 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846810 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846828 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846847 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846871 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846895 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846915 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846947 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846933 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.846997 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847022 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847045 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847089 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847109 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847131 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847155 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847177 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847190 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" 
(OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847194 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847377 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847553 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.848809 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.848904 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.849123 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.849394 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.849502 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.849824 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.849946 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850158 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850376 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850488 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850551 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850754 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850798 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.850920 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851004 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851256 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851436 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851601 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851779 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851892 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.851959 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853022 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853038 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853297 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853604 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853635 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853663 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853885 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.853893 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854151 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854168 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854240 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854262 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854396 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854582 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854715 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.854865 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855044 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855114 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855108 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855299 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855482 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855563 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.855723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.859460 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.859517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.859762 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.860120 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.861222 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.866363 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.866403 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.868704 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.869444 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870129 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870475 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.847201 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870553 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870574 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870602 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870619 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870635 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870655 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870672 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870689 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870704 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870750 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870768 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870783 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870814 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870830 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870846 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870861 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870876 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870891 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870906 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870921 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870936 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870949 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870964 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870979 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870993 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871008 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871024 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871041 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871055 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871085 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871103 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871119 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871134 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871150 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871189 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871211 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871249 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871268 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871298 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871314 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871304 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: 
"b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871330 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871366 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871388 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871408 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871471 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871492 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871561 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871572 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871660 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871672 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871681 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871691 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871700 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871710 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871719 4680 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871728 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871736 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871745 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871753 4680 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871761 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871771 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871779 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871794 4680 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871803 4680 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871812 4680 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871820 4680 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871835 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871844 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871854 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871863 4680 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871871 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871879 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871888 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871896 4680 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871905 4680 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871915 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871923 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871932 4680 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871940 4680 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871948 4680 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871957 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871965 4680 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871973 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871984 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.871993 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872001 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872010 4680 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872019 4680 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872027 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872036 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872046 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872055 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872080 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872093 4680 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872105 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872114 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872123 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872131 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872140 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872149 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872158 4680 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872167 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872176 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872187 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872196 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872205 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872214 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872222 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872231 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872239 4680 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872247 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872256 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc 
kubenswrapper[4680]: I0126 16:05:44.872265 4680 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872273 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872281 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872301 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872310 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872318 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872328 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872336 4680 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872344 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872352 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872360 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872369 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872377 4680 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872386 4680 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872394 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872404 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872413 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872421 4680 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872429 4680 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872438 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872447 4680 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872455 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872465 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872473 4680 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872481 4680 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872490 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: 
I0126 16:05:44.872498 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872678 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872689 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872701 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872710 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872719 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872727 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872735 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872719 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872744 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872817 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872829 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872840 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872851 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872862 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872907 4680 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872917 4680 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872927 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872936 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872948 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872958 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872967 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872976 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872985 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.872994 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873007 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873023 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873033 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873042 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873053 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873062 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873137 4680 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873147 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873156 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873165 4680 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873209 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873219 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873318 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873332 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873003 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.870555 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873597 4680 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.873939 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.874087 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.874231 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.874447 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.874738 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.874908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.875019 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.875377 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.876351 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.876415 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:45.376396571 +0000 UTC m=+20.537668840 (durationBeforeRetry 500ms). 
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.877540 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.877919 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.877988 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.878167 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.878271 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.878284 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.878550 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.878966 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.879099 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.879547 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.879923 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.881818 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.882217 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.883493 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.883887 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.884566 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:45.384549675 +0000 UTC m=+20.545821944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.884942 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.885147 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.885950 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.885953 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.886155 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.886460 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.886759 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.890290 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.890559 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.890977 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.891180 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.891250 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.892162 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.892168 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.892413 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.892912 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.893003 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.897024 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.897170 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.897570 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.897792 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.900404 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.900564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.903362 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.903400 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.903415 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.903496 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:45.403475988 +0000 UTC m=+20.564748357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.908600 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.908635 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.908648 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:44 crc kubenswrapper[4680]: E0126 16:05:44.908700 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:45.408681568 +0000 UTC m=+20.569953837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.910857 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.910952 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.913298 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.913625 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.914042 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.915332 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.915571 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.916744 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917037 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917046 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917201 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917049 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917231 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.917355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.920151 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.921356 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.921529 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.930511 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.931399 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.941583 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.942698 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dt95s"] Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.942979 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.949183 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.949247 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.949429 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.949697 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.950035 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.953878 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.958176 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.972371 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.973866 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.973983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974111 4680 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974205 4680 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974300 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974359 4680 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974428 4680 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974495 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974571 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974645 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974705 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974774 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974848 4680 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974921 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.974995 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975049 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975127 4680 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975178 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975282 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975368 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975441 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975494 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975567 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975619 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975725 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975799 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975867 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975919 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.975996 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976063 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976138 4680 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976264 4680 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976343 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976435 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976508 4680 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976564 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976641 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976713 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976766 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976844 4680 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976983 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977082 4680 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977190 4680 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977259 4680 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977316 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977385 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977456 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977508 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977580 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977651 4680 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977722 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977779 4680 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977849 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977927 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977996 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978050 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978124 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978176 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978225 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978308 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978374 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978433 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978484 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978558 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978610 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.978667 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.977016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.976897 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 16:05:44 crc kubenswrapper[4680]: I0126 16:05:44.990355 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.006329 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.012835 4680 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.013734 4680 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.013846 4680 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.013968 4680 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014054 4680 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014339 4680 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014435 4680 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014525 4680 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014622 4680 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.014899 4680 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.015003 4680 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.015282 4680 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.015413 4680 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.015532 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.20:54230->38.102.83.20:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e53830c87c940 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:05:25.919230272 +0000 UTC m=+1.080502591,LastTimestamp:2026-01-26 16:05:25.919230272 +0000 UTC m=+1.080502591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.015779 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-dt95s/status\": read tcp 38.102.83.20:54230->38.102.83.20:6443: use of closed network connection" Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.016145 4680 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.029028 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.038216 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.043529 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a50694666afc5138ec0862c7cbfde944c0fb124bbf4f495f687806b74ce2d1a5 WatchSource:0}: Error finding container a50694666afc5138ec0862c7cbfde944c0fb124bbf4f495f687806b74ce2d1a5: Status 404 returned error can't find the container with id a50694666afc5138ec0862c7cbfde944c0fb124bbf4f495f687806b74ce2d1a5 Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.044840 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:05:45 crc kubenswrapper[4680]: W0126 16:05:45.056783 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9e48a21e09c088d864cecf38dbacc391d6ae4b807715533eec83e8323e7306d4 WatchSource:0}: Error finding container 9e48a21e09c088d864cecf38dbacc391d6ae4b807715533eec83e8323e7306d4: Status 404 returned error can't find the container with id 9e48a21e09c088d864cecf38dbacc391d6ae4b807715533eec83e8323e7306d4 Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.058788 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.079436 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04527fbd-5b7b-40c2-b752-616f569e966a-hosts-file\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.079679 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8qc\" (UniqueName: \"kubernetes.io/projected/04527fbd-5b7b-40c2-b752-616f569e966a-kube-api-access-wm8qc\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.089496 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.102417 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.111825 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.115496 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:21:46.209383361 +0000 UTC Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.127331 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.172199 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.172715 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.173578 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.174278 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.174807 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.175278 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.175921 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.176479 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.177060 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.177583 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.178044 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.180637 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.180864 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04527fbd-5b7b-40c2-b752-616f569e966a-hosts-file\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.180907 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm8qc\" (UniqueName: \"kubernetes.io/projected/04527fbd-5b7b-40c2-b752-616f569e966a-kube-api-access-wm8qc\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.180960 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04527fbd-5b7b-40c2-b752-616f569e966a-hosts-file\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.181335 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.182265 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.182754 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.183829 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.184505 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.184863 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.186022 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.186764 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.187788 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.188519 4680 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.189009 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.190060 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.190454 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.191496 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.192223 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.192675 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.193361 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.193967 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 16:05:45 crc 
kubenswrapper[4680]: I0126 16:05:45.194476 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.194967 4680 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.195127 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.196467 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.197053 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.197495 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.200479 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.201532 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.201610 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm8qc\" (UniqueName: \"kubernetes.io/projected/04527fbd-5b7b-40c2-b752-616f569e966a-kube-api-access-wm8qc\") pod \"node-resolver-dt95s\" (UID: \"04527fbd-5b7b-40c2-b752-616f569e966a\") " pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.202088 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.203264 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.203924 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.204448 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.204538 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.205491 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.206491 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.207521 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.210720 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.211324 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.212107 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.212998 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.213590 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.218271 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.222883 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.223658 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.226470 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.226940 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.239362 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.258366 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc 
kubenswrapper[4680]: I0126 16:05:45.264248 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dt95s" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.286611 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.304329 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9e48a21e09c088d864cecf38dbacc391d6ae4b807715533eec83e8323e7306d4"} Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.304944 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"27ecececf99aac22bc8b34c5f7c8de7815df726b5862478ccce81ba158dc450d"} Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.308690 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a50694666afc5138ec0862c7cbfde944c0fb124bbf4f495f687806b74ce2d1a5"} Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.312800 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.328255 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.341793 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.363385 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.377956 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.382237 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.382321 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.382415 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.382432 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:05:46.382409371 +0000 UTC m=+21.543681640 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.382461 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:46.382448772 +0000 UTC m=+21.543721031 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.483534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.483576 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.483592 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484192 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484214 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484224 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484263 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:46.484250166 +0000 UTC m=+21.645522435 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484313 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484324 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484332 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484354 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:46.484347909 +0000 UTC m=+21.645620178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484383 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: E0126 16:05:45.484400 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:46.48439498 +0000 UTC m=+21.645667249 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.797337 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 16:00:44 +0000 UTC, rotation deadline is 2026-11-05 01:08:14.266435125 +0000 UTC Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.797431 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6777h2m28.469009375s for next certificate rotation Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.863634 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.901378 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 16:05:45 crc kubenswrapper[4680]: I0126 16:05:45.961851 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.011236 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.025386 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.115636 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:06:05.307210192 +0000 UTC Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.165976 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.169444 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.169573 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.169525 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.169760 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.169888 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.169974 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.260932 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.314631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dt95s" event={"ID":"04527fbd-5b7b-40c2-b752-616f569e966a","Type":"ContainerStarted","Data":"fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0"} Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.314684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dt95s" event={"ID":"04527fbd-5b7b-40c2-b752-616f569e966a","Type":"ContainerStarted","Data":"7d0958ac161b5c3b449b3dd7b8510644ac997b37eefdc6a5951b1383a5b7ac74"} Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.316935 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf"} Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.316974 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f"} Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.319196 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8"} Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.340477 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.348992 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lqgn2"] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.349346 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.355715 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qr4fm"] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.356150 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.357362 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.359874 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.360766 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.361989 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.362079 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.361997 4680 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.362523 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j2vl"] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.363526 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.363604 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.363841 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.364118 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.364296 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.368373 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mp72c"] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.369122 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.369607 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.369658 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.370000 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.370005 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.373729 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.374207 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.374611 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.374644 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.374961 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.378106 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.393187 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.393338 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.394043 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:05:48.394027312 +0000 UTC m=+23.555299581 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.397178 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.397686 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.397847 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:48.397815811 +0000 UTC m=+23.559088080 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.421147 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.437421 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.467441 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494507 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-kubelet\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494550 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-hostroot\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494649 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494718 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494764 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494786 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494859 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.494960 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-os-release\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-bin\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495083 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-daemon-config\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495112 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495133 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495158 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495182 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495224 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495162 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495328 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-26 16:05:48.495255659 +0000 UTC m=+23.656528018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495373 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hh5k\" (UniqueName: \"kubernetes.io/projected/9ac04312-7b74-4193-9b93-b54b91bab69b-kube-api-access-4hh5k\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495409 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495540 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495483 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495690 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-netns\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495711 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495733 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495744 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495758 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:48.495745734 +0000 UTC m=+23.657018073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:46 crc kubenswrapper[4680]: E0126 16:05:46.495781 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:48.495767594 +0000 UTC m=+23.657039863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495796 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cbae131-7d55-4573-b849-5a223c64ffa7-rootfs\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495840 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495863 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495927 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-cni-binary-copy\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495954 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-system-cni-dir\") pod 
\"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.495995 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zzv\" (UniqueName: \"kubernetes.io/projected/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-kube-api-access-k2zzv\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496020 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-system-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-socket-dir-parent\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496094 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-cnibin\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496137 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-etc-kubernetes\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496166 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496196 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496268 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-conf-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496310 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496336 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cnibin\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496388 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-multus\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496435 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496477 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496497 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-os-release\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496516 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496560 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-binary-copy\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496597 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496715 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cbae131-7d55-4573-b849-5a223c64ffa7-proxy-tls\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496748 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbae131-7d55-4573-b849-5a223c64ffa7-mcd-auth-proxy-config\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496778 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25v8\" (UniqueName: \"kubernetes.io/projected/4cbae131-7d55-4573-b849-5a223c64ffa7-kube-api-access-t25v8\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496801 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-k8s-cni-cncf-io\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496841 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-multus-certs\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496890 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxrq\" (UniqueName: \"kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496919 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.496970 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.499398 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.504409 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.523262 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.533002 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.555533 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.574446 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.587965 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.597636 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-cni-binary-copy\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.597899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-system-cni-dir\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.597987 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zzv\" (UniqueName: \"kubernetes.io/projected/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-kube-api-access-k2zzv\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598083 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-system-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-socket-dir-parent\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598286 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598381 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-cnibin\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598482 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-etc-kubernetes\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598566 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598033 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-system-cni-dir\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-system-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598258 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-socket-dir-parent\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598479 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-cnibin\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598522 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-etc-kubernetes\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598286 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-cni-binary-copy\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598737 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns\") pod \"ovnkube-node-5j2vl\" (UID: 
\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.598912 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599014 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599107 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599159 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-conf-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599345 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599449 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cnibin\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599518 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cnibin\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-conf-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599685 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-multus\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 
16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599790 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599915 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600007 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.599698 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-multus\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600028 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600190 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600015 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-binary-copy\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600345 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600446 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-os-release\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600646 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600728 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-os-release\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600610 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600572 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-cni-dir\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600736 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.600703 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-binary-copy\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601042 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601169 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cbae131-7d55-4573-b849-5a223c64ffa7-proxy-tls\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbae131-7d55-4573-b849-5a223c64ffa7-mcd-auth-proxy-config\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601384 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25v8\" (UniqueName: \"kubernetes.io/projected/4cbae131-7d55-4573-b849-5a223c64ffa7-kube-api-access-t25v8\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-k8s-cni-cncf-io\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601567 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-multus-certs\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601658 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601117 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601614 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-multus-certs\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601699 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601578 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-k8s-cni-cncf-io\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601753 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxrq\" (UniqueName: 
\"kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601849 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601879 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-kubelet\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601904 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-hostroot\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601926 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601952 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601976 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.601998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602038 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-os-release\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602060 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-bin\") pod \"multus-lqgn2\" (UID: 
\"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602099 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-daemon-config\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602105 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbae131-7d55-4573-b849-5a223c64ffa7-mcd-auth-proxy-config\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602120 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602142 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602147 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602167 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602184 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602191 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602214 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hh5k\" (UniqueName: \"kubernetes.io/projected/9ac04312-7b74-4193-9b93-b54b91bab69b-kube-api-access-4hh5k\") pod \"multus-lqgn2\" 
(UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-kubelet\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-hostroot\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602258 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-netns\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602295 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cbae131-7d55-4573-b849-5a223c64ffa7-rootfs\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602316 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602339 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602581 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602634 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-var-lib-cni-bin\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602659 4680 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602656 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-os-release\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602709 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ac04312-7b74-4193-9b93-b54b91bab69b-host-run-netns\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602749 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ac04312-7b74-4193-9b93-b54b91bab69b-multus-daemon-config\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cbae131-7d55-4573-b849-5a223c64ffa7-rootfs\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602835 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.602898 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.603330 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.603388 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.611501 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cbae131-7d55-4573-b849-5a223c64ffa7-proxy-tls\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.617969 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.629598 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25v8\" (UniqueName: \"kubernetes.io/projected/4cbae131-7d55-4573-b849-5a223c64ffa7-kube-api-access-t25v8\") pod \"machine-config-daemon-qr4fm\" (UID: \"4cbae131-7d55-4573-b849-5a223c64ffa7\") " pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.633095 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zzv\" (UniqueName: \"kubernetes.io/projected/86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6-kube-api-access-k2zzv\") pod \"multus-additional-cni-plugins-mp72c\" (UID: \"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\") " pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.633672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hh5k\" (UniqueName: \"kubernetes.io/projected/9ac04312-7b74-4193-9b93-b54b91bab69b-kube-api-access-4hh5k\") pod \"multus-lqgn2\" (UID: \"9ac04312-7b74-4193-9b93-b54b91bab69b\") " pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.638787 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxrq\" (UniqueName: \"kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq\") pod \"ovnkube-node-5j2vl\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.654219 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.663093 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lqgn2" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.675139 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:05:46 crc kubenswrapper[4680]: W0126 16:05:46.679322 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac04312_7b74_4193_9b93_b54b91bab69b.slice/crio-e96f75a85fc4960ddd41e8710ca0c6316cb17d2c87025aa9fabb8555a5ab2751 WatchSource:0}: Error finding container e96f75a85fc4960ddd41e8710ca0c6316cb17d2c87025aa9fabb8555a5ab2751: Status 404 returned error can't find the container with id e96f75a85fc4960ddd41e8710ca0c6316cb17d2c87025aa9fabb8555a5ab2751 Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.684983 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.697028 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745326
5a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",
\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.705213 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mp72c" Jan 26 16:05:46 crc kubenswrapper[4680]: W0126 16:05:46.728756 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d42ee6_ef5f_4c64_b5ff_bb13c0dbbbb6.slice/crio-1fc4d862b0052c647a7c58a602c6e2d012e22990ebe0047d3acf4ff7296abf05 WatchSource:0}: Error finding container 1fc4d862b0052c647a7c58a602c6e2d012e22990ebe0047d3acf4ff7296abf05: Status 404 returned error can't find the container with id 1fc4d862b0052c647a7c58a602c6e2d012e22990ebe0047d3acf4ff7296abf05 Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.730542 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.750248 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.769263 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.784355 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.797748 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.822994 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.851105 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.876317 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.910015 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.943901 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:46 crc kubenswrapper[4680]: I0126 16:05:46.963144 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.116328 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:14:00.374764914 +0000 UTC Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.322897 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" exitCode=0 Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.322952 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.322976 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"ed596967c76a6dce0921c4f8ea9429ede481ad9e460e0cd7af85d9121a0d0efb"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.327193 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.327244 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.327254 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"c2dbc141abdf1600a99083660cb1626fe9640763aaba0472c73ad37ee038fc51"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.333049 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96" exitCode=0 Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.333106 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.333156 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerStarted","Data":"1fc4d862b0052c647a7c58a602c6e2d012e22990ebe0047d3acf4ff7296abf05"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.335691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerStarted","Data":"5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.335721 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerStarted","Data":"e96f75a85fc4960ddd41e8710ca0c6316cb17d2c87025aa9fabb8555a5ab2751"} Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.376678 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.391699 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.407606 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.425302 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.439659 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.452523 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.469361 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.481395 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.493362 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.507129 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.525848 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.545622 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.559904 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.571600 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.581423 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.594943 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.606865 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.621270 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.630335 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.642134 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.652094 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.662129 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.672248 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.693756 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.710788 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:47 crc kubenswrapper[4680]: I0126 16:05:47.724771 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc 
kubenswrapper[4680]: I0126 16:05:48.058978 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.061838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.061878 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.061887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.061986 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.069052 4680 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.069334 4680 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.070233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.070259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.070266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.070280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.070289 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.086413 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.089441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.089466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.089475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.089488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.089498 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.099761 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.103024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.103082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.103097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.103115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.103127 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.116619 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:13:55.654980294 +0000 UTC Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.116828 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.120524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.120564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.120578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.120595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.120608 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.141333 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.147674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.147710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.147720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.147736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.147745 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.169300 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.169339 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.169358 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.169756 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.169731 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.169879 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.171443 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.171594 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.173730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.173815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.173827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.173844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.173853 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.286179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.286222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.286235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.286254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.286266 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.347820 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.347881 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.347893 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.347907 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.347918 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.350833 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94" exitCode=0 Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.350902 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.353627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.365650 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.378380 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.388311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.388342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.388351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.388363 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.388372 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.390947 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.405493 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.418468 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.418607 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.418718 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.418776 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 16:05:52.418758818 +0000 UTC m=+27.580031087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.418813 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:05:52.418787289 +0000 UTC m=+27.580059598 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.418903 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.430148 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.443373 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.458535 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.468856 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.482281 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.491037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.491090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.491101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.491115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.491125 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.512493 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.519613 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.519651 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.519680 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.519818 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.519911 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 16:05:52.519887172 +0000 UTC m=+27.681159441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520010 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520025 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520040 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520098 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:52.520088528 +0000 UTC m=+27.681360797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520132 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520154 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520166 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:48 crc kubenswrapper[4680]: E0126 16:05:48.520221 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:05:52.520205371 +0000 UTC m=+27.681477760 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.533024 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.552822 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.563794 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.577178 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.588923 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.595089 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.595132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.595143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.595160 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.595175 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.602926 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.613972 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.623170 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.634871 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.646117 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.659189 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.670311 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.686400 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.697138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.697170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.697178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.697200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.697211 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.700213 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.719164 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.799027 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.799216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.799333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.799430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.799502 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.902269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.902300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.902308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.902324 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:48 crc kubenswrapper[4680]: I0126 16:05:48.902333 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:48Z","lastTransitionTime":"2026-01-26T16:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.004980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.005018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.005027 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.005042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.005052 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.088340 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8ftvt"] Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.088660 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.090572 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.090984 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.091350 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.092280 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.104295 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.106968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.106998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.107008 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.107025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.107037 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.116226 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.117113 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:20:27.272152126 +0000 UTC Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.127846 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.147974 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.167187 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.182915 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.201088 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.209718 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.209757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.209768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.209784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.209794 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.214286 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.227489 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-serviceca\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.227551 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-host\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.227581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnlz8\" (UniqueName: \"kubernetes.io/projected/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-kube-api-access-hnlz8\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.228157 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.238916 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.255154 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.266594 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.275948 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.287647 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.326963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.327009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.327019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.327034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.327043 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.328238 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-host\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.328295 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnlz8\" (UniqueName: \"kubernetes.io/projected/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-kube-api-access-hnlz8\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.328339 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-host\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.328355 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-serviceca\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.329772 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-serviceca\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.345826 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnlz8\" (UniqueName: \"kubernetes.io/projected/f5bd0b77-69ce-4f27-a3cb-1d55d7942f41-kube-api-access-hnlz8\") pod \"node-ca-8ftvt\" (UID: \"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\") " pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.359496 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063" exitCode=0 Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.359561 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.368905 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.397804 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.405821 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8ftvt" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.407674 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: W0126 16:05:49.418009 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5bd0b77_69ce_4f27_a3cb_1d55d7942f41.slice/crio-9cb4ef32297dacee3ba077a1092255dc8664f9cf6a1707ab1182611c2c7b72a0 WatchSource:0}: Error finding container 9cb4ef32297dacee3ba077a1092255dc8664f9cf6a1707ab1182611c2c7b72a0: Status 404 returned error can't find the container with id 9cb4ef32297dacee3ba077a1092255dc8664f9cf6a1707ab1182611c2c7b72a0 Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.423668 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.429148 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.429182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.429193 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.429209 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.429220 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.442019 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.454250 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.472563 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.482113 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.501729 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.526857 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.532495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.532545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.532579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.532598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.532609 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.546563 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0
320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.555707 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.567293 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.579182 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.595803 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.636087 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.636123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.636133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.636166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.636178 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.740012 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.740045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.740054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.740097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.740111 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.842501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.842523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.842530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.842543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.842551 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.944468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.944510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.944521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.944536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:49 crc kubenswrapper[4680]: I0126 16:05:49.944546 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:49Z","lastTransitionTime":"2026-01-26T16:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.046545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.046591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.046608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.046630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.046650 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.118259 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:59:28.62975512 +0000 UTC
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.149338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.149382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.149393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.149419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.149434 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.168911 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:05:50 crc kubenswrapper[4680]: E0126 16:05:50.169041 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.168929 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.168930 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:05:50 crc kubenswrapper[4680]: E0126 16:05:50.169239 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:05:50 crc kubenswrapper[4680]: E0126 16:05:50.169331 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.252293 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.252366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.252380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.252399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.252414 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.355441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.355507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.355521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.355571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.355588 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.374821 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316" exitCode=0 Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.375047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.376600 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8ftvt" event={"ID":"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41","Type":"ContainerStarted","Data":"90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.376623 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8ftvt" event={"ID":"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41","Type":"ContainerStarted","Data":"9cb4ef32297dacee3ba077a1092255dc8664f9cf6a1707ab1182611c2c7b72a0"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.392390 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.420382 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.447180 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.458970 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.459023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.459039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.459105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.459119 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.475959 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.496751 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.512098 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.526395 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.544060 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.559426 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.562173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.562206 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.562214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.562229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.562240 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.564290 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.567490 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.576245 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.601090 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.614515 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.633263 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.651109 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.665831 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.665895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.665909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.665929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.665940 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.666438 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.682058 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.695111 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.707184 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.719724 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.732489 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.745841 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.759241 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.769403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.769439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.769453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.769474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.769485 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.778127 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.792398 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.804969 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.828242 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.840261 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.850553 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.864652 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.872396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.872436 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.872445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.872465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.872476 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.880885 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:50Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.976316 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.976369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.976382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.976424 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:50 crc kubenswrapper[4680]: I0126 16:05:50.976464 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:50Z","lastTransitionTime":"2026-01-26T16:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.085179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.085702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.085721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.085752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.085772 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.118420 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:32:49.096790252 +0000 UTC Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.188954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.189001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.189014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.189031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.189040 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.295688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.295733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.295748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.295771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.295785 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.393377 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.397954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.398007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.398020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.398041 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.398054 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.399165 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a" exitCode=0 Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.399263 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.418683 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.436198 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.449356 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.472750 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.494115 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.500598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.500653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.500663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.500678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.500688 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.512509 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.527344 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.541726 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.561365 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.573767 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.585234 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.596094 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.604557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.604628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.604643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.604664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.604680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.609862 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.620130 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.636192 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:51Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.707694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.707729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.707740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.707759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.707771 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.810440 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.810474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.810485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.810503 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.810515 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.913654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.913694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.913702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.913717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:51 crc kubenswrapper[4680]: I0126 16:05:51.913726 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:51Z","lastTransitionTime":"2026-01-26T16:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.016024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.016117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.016132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.016161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.016179 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.118582 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:35:50.130325203 +0000 UTC Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.119040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.119097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.119139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.119159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.119170 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.170350 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.170412 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.170475 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.170608 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.170697 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.170781 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.220912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.220956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.220969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.220984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.220993 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.324122 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.324160 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.324168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.324182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.324192 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.406202 4680 generic.go:334] "Generic (PLEG): container finished" podID="86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6" containerID="32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732" exitCode=0 Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.406274 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerDied","Data":"32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.424450 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2
5v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.436877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.436951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.436965 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.436984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.437003 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.440622 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.461582 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.468437 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.468557 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.468673 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.468720 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.468708761 +0000 UTC m=+35.629981030 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.469036 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.4690278 +0000 UTC m=+35.630300069 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.487012 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.501833 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397c
facbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\
\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.515339 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf942
8533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.536113 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90
c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.542199 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.542235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 
16:05:52.542244 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.542259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.542269 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.550527 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.560663 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.569779 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.569821 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.569848 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570178 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570209 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570223 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570271 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.570255887 +0000 UTC m=+35.731528156 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570581 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570603 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570620 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.570609917 +0000 UTC m=+35.731882186 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570620 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570636 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:52 crc kubenswrapper[4680]: E0126 16:05:52.570664 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.570656449 +0000 UTC m=+35.731928708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.575288 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.587418 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.604198 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.618037 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.626780 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.641472 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:52Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.644216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.644238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.644248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.644292 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.644303 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.747554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.747580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.747588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.747602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.747611 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.850292 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.850345 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.850357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.850373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.850384 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.952862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.952928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.952942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.952959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:52 crc kubenswrapper[4680]: I0126 16:05:52.952970 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:52Z","lastTransitionTime":"2026-01-26T16:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.055284 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.055334 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.055345 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.055367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.055453 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.119421 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:09:13.485022171 +0000 UTC Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.158402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.158456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.158465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.158485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.158496 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.260821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.260907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.260916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.260931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.260939 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.363319 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.363355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.363365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.363381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.363392 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.413642 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.413907 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.413983 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.414085 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.422387 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" event={"ID":"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6","Type":"ContainerStarted","Data":"771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.429778 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.436896 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.443444 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.444346 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.459225 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.466944 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.467324 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.467375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.467384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.467398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.467407 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.479442 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.492181 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.501127 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.511263 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.520358 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.529883 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.542272 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.556449 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48d
b185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.569245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.569288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.569297 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.569312 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.569323 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.575489 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.591716 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195
b464f1f175a3638de4e84621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.605736 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.617808 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.635650 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.652059 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.664932 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.671222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.671258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc 
kubenswrapper[4680]: I0126 16:05:53.671267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.671282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.671291 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.677438 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.689348 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.700386 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.709688 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.723454 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.736246 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.746906 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.758619 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.768947 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.774007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.774044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.774053 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.774104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.774116 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.778776 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.788816 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:53Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.876895 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.877305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.877450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.877627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.877760 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.980844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.980900 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.980920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.980943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:53 crc kubenswrapper[4680]: I0126 16:05:53.980960 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:53Z","lastTransitionTime":"2026-01-26T16:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.084236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.084779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.084957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.085182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.085314 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.120265 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:21:50.630667295 +0000 UTC Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.169150 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.169232 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:54 crc kubenswrapper[4680]: E0126 16:05:54.169349 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:54 crc kubenswrapper[4680]: E0126 16:05:54.169617 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.169821 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:54 crc kubenswrapper[4680]: E0126 16:05:54.170136 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.189842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.189892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.189910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.189935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.189954 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.293459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.293646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.293719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.293807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.293839 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.397181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.397277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.397341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.397370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.397387 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.500013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.500110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.500137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.500165 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.500183 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.602619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.603110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.603269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.603413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.603539 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.707902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.708491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.708659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.708838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.709007 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.820514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.820792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.820928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.821020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.821111 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.927801 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.928434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.928510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.928591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:54 crc kubenswrapper[4680]: I0126 16:05:54.928663 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:54Z","lastTransitionTime":"2026-01-26T16:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.030877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.031171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.031239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.031298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.031362 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.121205 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:37:44.709018088 +0000 UTC Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.135090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.135127 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.135135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.135150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.135160 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.197321 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.235018 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.237152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.237180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.237189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.237218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.237227 4680 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.266880 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.281753 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.293024 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.306918 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.318589 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.338604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.338641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.338651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.338665 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.338674 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.342301 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195
b464f1f175a3638de4e84621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.357801 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.369006 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.399683 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.411978 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.427156 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.446012 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.446054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.446078 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.446092 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.446102 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.447591 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.463159 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.548512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.548549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.548558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.548572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.548582 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.651313 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.651357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.651368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.651386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.651397 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.754437 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.754485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.754500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.754519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.754531 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.858294 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.858371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.858390 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.858421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.858443 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.962824 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.962871 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.962881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.962899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:55 crc kubenswrapper[4680]: I0126 16:05:55.962911 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:55Z","lastTransitionTime":"2026-01-26T16:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.086479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.086575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.086598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.086633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.086661 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.121961 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:42:27.978278851 +0000 UTC Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.169037 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.169137 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.169036 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:56 crc kubenswrapper[4680]: E0126 16:05:56.169453 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:56 crc kubenswrapper[4680]: E0126 16:05:56.169560 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:56 crc kubenswrapper[4680]: E0126 16:05:56.169736 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.190314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.190367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.190377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.190396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.190410 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.292463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.292507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.292518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.292536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.292545 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.394413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.394466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.394480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.394501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.394516 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.448887 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/0.log" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.452251 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621" exitCode=1 Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.452289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.452924 4680 scope.go:117] "RemoveContainer" containerID="90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.472171 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.487505 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.497341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.497384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.497395 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.497411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.497421 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.503640 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.515853 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.534773 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.550791 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.566511 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.584584 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.598119 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.600540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.600572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.600581 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.600600 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.600612 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.609891 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.611114 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.622213 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.634280 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48d
b185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.656453 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42
484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.675470 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.689707 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.703658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.703887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.703990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.704149 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.704266 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.808208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.808242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.808251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.808265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.808273 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.911481 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.911522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.911534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.911557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:56 crc kubenswrapper[4680]: I0126 16:05:56.911572 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:56Z","lastTransitionTime":"2026-01-26T16:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.014264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.014308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.014322 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.014342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.014356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.117471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.117517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.117529 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.117548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.117562 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.122627 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:41:46.204598103 +0000 UTC Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.220321 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.220372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.220389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.220414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.220431 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.323679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.323734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.323750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.323777 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.323793 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.427213 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.427253 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.427266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.427289 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.427305 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.459981 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/0.log" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.462856 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.463925 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.480320 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.491199 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.506281 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.520905 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.529684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.529736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.529746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.529764 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.529777 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.536927 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.549633 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.562952 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.576747 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.589591 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.611100 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.632549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.632601 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.632614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.632639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.632650 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.635484 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc
8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.650919 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.668548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.686756 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.701588 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.735380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.735426 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.735438 4680 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.735457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.735469 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.837737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.837767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.837777 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.837791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.837805 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.940696 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.940733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.940745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.940763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:57 crc kubenswrapper[4680]: I0126 16:05:57.940774 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:57Z","lastTransitionTime":"2026-01-26T16:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.043110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.043170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.043188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.043214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.043248 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.123282 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:46:31.085221536 +0000 UTC Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.146014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.146047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.146263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.146280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.146291 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.169109 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.169145 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.169239 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.169116 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.169334 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.169401 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.249220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.249251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.249259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.249273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.249281 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.351796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.351853 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.351875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.351920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.351943 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.421761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.421894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.421921 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.421952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.421971 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.442292 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.455153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.455201 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.455221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.455241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.455257 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.477012 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.481589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.481654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
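
Both timestamps in the x509 failure are explicit, so the error is easy to sanity-check offline: the node's clock is about five months past the webhook certificate's notAfter date, which is why every retry fails identically. A quick reproduction of the comparison, with both values lifted from the error text above:

from datetime import datetime

# Values copied from the TLS error: current time vs. certificate notAfter.
now = datetime.fromisoformat("2026-01-26T16:05:58+00:00")
not_after = datetime.fromisoformat("2025-08-24T17:21:41+00:00")

assert now > not_after  # "certificate has expired or is not yet valid"
print(now - not_after)  # 154 days, 22:44:17 -- roughly five months past expiry
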
event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.481667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.481698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.481713 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.497412 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.502309 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.502376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
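
Because the webhook's own serving certificate is expired, the kubelet's retries can never succeed until the certificate is rotated; the status patches themselves are well-formed. A hedged sketch of how one might confirm the validity window from the node, assuming Python with the third-party cryptography package is available and the webhook at 127.0.0.1:9743 (the address in the failing Post above) is listening:

import ssl
from cryptography import x509

# Host and port taken from the failing webhook URL in the errors above.
HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() skips chain validation when ca_certs is None,
# so it returns the PEM even though the certificate is already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log
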
event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.502389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.502408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.502420 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.519699 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.524304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.524374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.524386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.524411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.524428 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.539724 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:58 crc kubenswrapper[4680]: E0126 16:05:58.539892 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.542031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.542142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.542185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.542213 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.542227 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.644880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.644947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.644966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.644992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.645010 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.748677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.748728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.748742 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.748763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.748779 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.852110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.852173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.852194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.852222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.852241 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.955471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.955560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.955579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.955608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:58 crc kubenswrapper[4680]: I0126 16:05:58.955628 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:58Z","lastTransitionTime":"2026-01-26T16:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.059336 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.059400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.059420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.059448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.059468 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.123763 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:41:51.028434333 +0000 UTC Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.162493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.162589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.162613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.162650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.162674 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.200818 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf"] Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.201621 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.206444 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.206544 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.221375 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.237467 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.254192 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265671 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265791 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265863 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265917 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfhl\" (UniqueName: \"kubernetes.io/projected/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-kube-api-access-9gfhl\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.265969 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.269301 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.289393 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.307582 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc
8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.325778 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.340737 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.353483 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.367056 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.367182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gfhl\" (UniqueName: \"kubernetes.io/projected/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-kube-api-access-9gfhl\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.367271 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.367319 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.367304 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.368105 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.368452 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.369589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.369638 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.369659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.369691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.369713 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.381085 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: \"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.384616 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.391467 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gfhl\" (UniqueName: \"kubernetes.io/projected/c9dc4863-2cc9-49db-9d40-2b1d04bddea3-kube-api-access-9gfhl\") pod \"ovnkube-control-plane-749d76644c-rpcvf\" (UID: 
\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.400823 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.415135 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.427121 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.437544 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.452153 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.471758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.471815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.471836 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.471865 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.471888 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.472174 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/1.log" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.473001 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/0.log" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.476689 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035" exitCode=1 Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.476727 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.476795 4680 scope.go:117] "RemoveContainer" containerID="90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.477605 4680 scope.go:117] "RemoveContainer" containerID="ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035" Jan 26 16:05:59 crc kubenswrapper[4680]: E0126 16:05:59.477765 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.499741 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.516188 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.518510 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.542871 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.560762 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.575726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.576201 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.576430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.576646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.576858 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.595258 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.611868 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.631734 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.649534 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.663259 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.679409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.679445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.679457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.679479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.679492 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.682227 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.697517 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.710812 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.724956 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48d
b185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.766036 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42
484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.781532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.781589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc 
kubenswrapper[4680]: I0126 16:05:59.781599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.781639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.781652 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.792358 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc
8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.809456 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.883553 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.883590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc 
kubenswrapper[4680]: I0126 16:05:59.883600 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.883616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.883627 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.958638 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fbl6p"] Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.959136 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:05:59 crc kubenswrapper[4680]: E0126 16:05:59.959198 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.975868 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.978225 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.978307 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdct\" (UniqueName: \"kubernetes.io/projected/40816c76-44c8-4161-84f3-b1693d48aeaa-kube-api-access-vcdct\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.986653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.986679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.986690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.986707 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.986718 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:05:59Z","lastTransitionTime":"2026-01-26T16:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:05:59 crc kubenswrapper[4680]: I0126 16:05:59.993176 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.007013 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.031582 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.046402 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.064588 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.076666 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.084488 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdct\" (UniqueName: \"kubernetes.io/projected/40816c76-44c8-4161-84f3-b1693d48aeaa-kube-api-access-vcdct\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.084557 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.084765 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.084882 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs 
podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:00.584855706 +0000 UTC m=+35.746127975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.088671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.088701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.088710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.088726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.088735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.095853 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.101599 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdct\" (UniqueName: \"kubernetes.io/projected/40816c76-44c8-4161-84f3-b1693d48aeaa-kube-api-access-vcdct\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.114376 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.124612 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:34:42.370651826 +0000 UTC Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.129536 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.142056 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.153946 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.169166 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.169231 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.169262 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.169462 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.169666 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.169756 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.179245 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef
72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.191481 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.191589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.191663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.191734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.191811 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.196202 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.208742 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.224850 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.239865 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.294785 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.294960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.294975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.294999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.295016 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.398446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.398531 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.398546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.398574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.398611 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.483725 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" event={"ID":"c9dc4863-2cc9-49db-9d40-2b1d04bddea3","Type":"ContainerStarted","Data":"8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.483888 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" event={"ID":"c9dc4863-2cc9-49db-9d40-2b1d04bddea3","Type":"ContainerStarted","Data":"385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.483914 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" event={"ID":"c9dc4863-2cc9-49db-9d40-2b1d04bddea3","Type":"ContainerStarted","Data":"7f0415fb112ba96e732703c4d6a7b4705d1ff26bf1b9147ada24c1a3b560f937"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.486477 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/1.log" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.488579 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.488701 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:06:16.488673862 +0000 UTC m=+51.649946311 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.488895 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.489080 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.489138 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:16.489129385 +0000 UTC m=+51.650401654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.500968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.501018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.501029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.501047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.501060 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.510196 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.530532 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.554518 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.576928 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48d
b185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.589997 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.590058 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590203 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590299 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:16.59027079 +0000 UTC m=+51.751543099 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.590208 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590338 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590358 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.590365 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590374 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590452 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:16.590438955 +0000 UTC m=+51.751711264 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590489 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590541 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590602 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590729 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:16.590700372 +0000 UTC m=+51.751972671 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.590915 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: E0126 16:06:00.591041 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:01.590998311 +0000 UTC m=+36.752270620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.605021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.605107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.605138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.605168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.605186 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.607821 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f
8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.639727 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 
services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.656150 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.667758 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.684506 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.700260 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.708501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.708583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.708603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.708638 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.708658 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.717028 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.728946 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.744555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.793289 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.806548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.811809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.811868 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.811884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.811907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.811926 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.818126 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.830099 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:00Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.918545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.918594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.918609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.918628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:00 crc kubenswrapper[4680]: I0126 16:06:00.918645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:00Z","lastTransitionTime":"2026-01-26T16:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.021318 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.021393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.021403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.021420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.021429 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.123976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.124272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.124351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.124421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.124493 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.124888 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:22:32.770892856 +0000 UTC Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.226867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.226912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.226922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.226937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.226947 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.331315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.331357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.331403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.331421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.331486 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.435180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.435526 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.435630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.435749 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.435878 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.538372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.538412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.538423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.538439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.538448 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.603670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:01 crc kubenswrapper[4680]: E0126 16:06:01.603799 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:01 crc kubenswrapper[4680]: E0126 16:06:01.603846 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:03.603832667 +0000 UTC m=+38.765104936 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.641813 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.641864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.641876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.641896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.641909 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.744520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.744949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.745385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.745550 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.745696 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.847921 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.847960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.847972 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.847991 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.848002 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.951380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.951421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.951430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.951445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:01 crc kubenswrapper[4680]: I0126 16:06:01.951454 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:01Z","lastTransitionTime":"2026-01-26T16:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.054320 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.054565 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.054645 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.054749 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.054837 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.126025 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:02:51.330594524 +0000 UTC
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.157299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.157366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.157384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.157422 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.157443 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.169601 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.169640 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.170049 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:06:02 crc kubenswrapper[4680]: E0126 16:06:02.170236 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:06:02 crc kubenswrapper[4680]: E0126 16:06:02.170364 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.170268 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:06:02 crc kubenswrapper[4680]: E0126 16:06:02.170802 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:06:02 crc kubenswrapper[4680]: E0126 16:06:02.170898 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.260852 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.260892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.260901 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.260916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.260929 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.364118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.364417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.364603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.365150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.365338 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.468572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.468804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.468912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.469003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.469103 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.571815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.571915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.571923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.571936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.571944 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.674564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.675020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.675261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.675493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.675847 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.778941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.779374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.779591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.779800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.780010 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.883003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.883035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.883050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.883065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.883095 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.985472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.985520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.985538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.985560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:02 crc kubenswrapper[4680]: I0126 16:06:02.985576 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:02Z","lastTransitionTime":"2026-01-26T16:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.088506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.088538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.088548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.088562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.088571 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.126632 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:30:26.232367704 +0000 UTC Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.190680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.190724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.190739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.190760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.190774 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.294505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.294554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.294567 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.294585 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.294597 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.396913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.396971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.396988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.397015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.397031 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.499630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.499996 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.500227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.500420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.500602 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.603384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.603429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.603443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.603461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.603473 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.625796 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p"
Jan 26 16:06:03 crc kubenswrapper[4680]: E0126 16:06:03.626004 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 16:06:03 crc kubenswrapper[4680]: E0126 16:06:03.626169 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:07.626135572 +0000 UTC m=+42.787407881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.706933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.707333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.707582 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.707857 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.708122 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.818048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.818111 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.818123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.818137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.818147 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.921028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.921090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.921101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.921117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:03 crc kubenswrapper[4680]: I0126 16:06:03.921130 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:03Z","lastTransitionTime":"2026-01-26T16:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.023632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.023683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.023699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.023719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.023734 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.126564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.126605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.126615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.126633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.126645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.127163 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:23:07.618112072 +0000 UTC
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.169285 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:06:04 crc kubenswrapper[4680]: E0126 16:06:04.169430 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.169833 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:06:04 crc kubenswrapper[4680]: E0126 16:06:04.170033 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.169948 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p"
Jan 26 16:06:04 crc kubenswrapper[4680]: E0126 16:06:04.170147 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.169913 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:06:04 crc kubenswrapper[4680]: E0126 16:06:04.170212 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.228866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.228899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.228923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.228938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.228946 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.331989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.332033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.332044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.332060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.332086 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.433959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.434007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.434019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.434037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.434048 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.535759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.535798 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.535811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.535827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.535842 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.638476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.638511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.638521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.638537 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.638546 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.740979 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.741015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.741025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.741042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.741052 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.843276 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.843539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.843618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.843710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.843812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.946744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.946993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.947058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.947151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:04 crc kubenswrapper[4680]: I0126 16:06:04.947222 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:04Z","lastTransitionTime":"2026-01-26T16:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.050490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.050561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.050584 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.050615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.050638 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.127280 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:43:36.684702014 +0000 UTC Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.153496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.153531 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.153544 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.153558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.153567 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.181460 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.191712 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.204950 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.214810 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.232748 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.246110 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256424 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.256328 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.270097 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.280814 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.290381 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.299708 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.309601 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.325636 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 
services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.348685 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.358608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.358635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc 
kubenswrapper[4680]: I0126 16:06:05.358643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.358656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.358664 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.360394 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.375642 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.393084 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.460923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.460953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.460962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.460975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.460984 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.564189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.564257 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.564279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.564307 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.564327 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.667062 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.667586 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.667817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.668054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.668307 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.770453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.770680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.770745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.770808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.770877 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.873733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.873792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.873809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.873833 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.873851 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.976706 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.976984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.977138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.977250 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:05 crc kubenswrapper[4680]: I0126 16:06:05.977342 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:05Z","lastTransitionTime":"2026-01-26T16:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.079049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.079105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.079115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.079129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.079140 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.127828 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:49:19.600085016 +0000 UTC Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.168710 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.168731 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.168733 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.168839 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:06 crc kubenswrapper[4680]: E0126 16:06:06.168981 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:06 crc kubenswrapper[4680]: E0126 16:06:06.169267 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:06 crc kubenswrapper[4680]: E0126 16:06:06.169374 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:06 crc kubenswrapper[4680]: E0126 16:06:06.169474 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.182277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.182315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.182336 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.182350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.182359 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.285132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.285173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.285184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.285200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.285211 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.388126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.388172 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.388184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.388200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.388212 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.490458 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.490498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.490509 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.490524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.490534 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.593444 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.593471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.593479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.593492 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.593501 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.695305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.695343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.695355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.695371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.695383 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.798159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.798423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.798513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.798605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.798678 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.903184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.903242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.903258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.903315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:06 crc kubenswrapper[4680]: I0126 16:06:06.903335 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:06Z","lastTransitionTime":"2026-01-26T16:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.006310 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.006352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.006361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.006376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.006388 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.108638 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.108716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.108728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.108743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.108751 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.128352 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:20:02.957152126 +0000 UTC Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.210598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.210643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.210653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.210669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.210680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.314262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.314303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.314315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.314332 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.314343 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.416766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.416810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.416825 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.416846 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.416861 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.518489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.518737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.518893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.519087 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.519242 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.622374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.622660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.622799 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.622930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.623203 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.664147 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:07 crc kubenswrapper[4680]: E0126 16:06:07.664275 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:07 crc kubenswrapper[4680]: E0126 16:06:07.664557 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:15.664540674 +0000 UTC m=+50.825812943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.725744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.725774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.725782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.725794 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.725803 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.827914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.827946 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.827953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.827967 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.827976 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.930762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.931020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.931163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.931259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:07 crc kubenswrapper[4680]: I0126 16:06:07.931352 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:07Z","lastTransitionTime":"2026-01-26T16:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.033180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.033424 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.033488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.033558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.033646 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.129042 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:04:53.776713438 +0000 UTC Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.135804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.135925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.135990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.136058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.136145 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.169542 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.169584 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.169851 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.169657 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.170120 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.169584 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.170292 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.169935 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.238775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.238808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.238818 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.238835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.238846 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.340962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.341011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.341022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.341039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.341051 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.443566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.443817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.443923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.444022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.444159 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.546539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.546575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.546583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.546598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.546608 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.651965 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.652022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.652037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.652055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.652092 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.754704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.754740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.754751 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.754765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.754776 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.856363 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.856389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.856398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.856412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.856421 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.941470 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.941696 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.941797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.941870 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.941935 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.953850 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.958580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.958710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.958776 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.958845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.958904 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.970261 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.973351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.973393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.973401 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.973416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.973426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:08 crc kubenswrapper[4680]: E0126 16:06:08.984877 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.988212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.988266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.988282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.988299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:08 crc kubenswrapper[4680]: I0126 16:06:08.988311 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:08Z","lastTransitionTime":"2026-01-26T16:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: E0126 16:06:09.000094 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.003692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.003732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.003744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.003761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.003773 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: E0126 16:06:09.014191 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:09 crc kubenswrapper[4680]: E0126 16:06:09.014649 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.015980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.016126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.016245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.016347 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.016426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.118671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.118700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.118710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.118724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.118735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.129880 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:24:46.974156064 +0000 UTC Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.222247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.222286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.222298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.222314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.222325 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.324791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.325049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.325141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.325214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.325274 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.427329 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.427550 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.427607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.427677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.427733 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.530017 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.530052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.530060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.530092 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.530102 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.632779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.633233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.633501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.633869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.634252 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.737477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.737533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.737552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.737575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.737595 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.840365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.840406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.840416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.840432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.840446 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.942662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.942705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.942715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.942731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:09 crc kubenswrapper[4680]: I0126 16:06:09.942742 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:09Z","lastTransitionTime":"2026-01-26T16:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.045281 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.045534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.045596 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.045660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.045734 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.131610 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:10:17.651078781 +0000 UTC Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.147642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.147847 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.147912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.147981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.148036 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.169305 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.169387 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.169421 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:10 crc kubenswrapper[4680]: E0126 16:06:10.169591 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:10 crc kubenswrapper[4680]: E0126 16:06:10.169744 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.169327 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:10 crc kubenswrapper[4680]: E0126 16:06:10.169858 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:10 crc kubenswrapper[4680]: E0126 16:06:10.170134 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.250216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.250272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.250290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.250313 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.250330 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.352327 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.352555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.352639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.352723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.352813 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.455973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.456029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.456048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.456099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.456117 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.559878 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.559939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.559956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.559981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.559997 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.662228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.662566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.662759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.662981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.663207 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.765651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.765679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.765687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.765699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.765708 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.867633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.867694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.867705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.867719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.867728 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.969850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.969888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.969898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.969915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:10 crc kubenswrapper[4680]: I0126 16:06:10.969925 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:10Z","lastTransitionTime":"2026-01-26T16:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.072304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.072373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.072389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.072412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.072426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.132045 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:11:31.418254903 +0000 UTC Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.174419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.174455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.174464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.174476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.174485 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.277619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.277665 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.277680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.277699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.277711 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.381179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.381212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.381222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.381237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.381248 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.484789 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.485211 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.485348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.485509 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.485644 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.588643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.588683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.588697 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.588715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.588728 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.691162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.691210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.691221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.691236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.691248 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.794714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.794768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.794786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.794810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.794828 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.897945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.898009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.898025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.898057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:11 crc kubenswrapper[4680]: I0126 16:06:11.898102 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:11Z","lastTransitionTime":"2026-01-26T16:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.000912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.000959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.000968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.000984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.000996 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.103836 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.103909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.103926 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.103951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.103968 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.133289 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:13:36.775788543 +0000 UTC Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.169038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.169176 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.169112 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.169113 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:12 crc kubenswrapper[4680]: E0126 16:06:12.169306 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:12 crc kubenswrapper[4680]: E0126 16:06:12.169492 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:12 crc kubenswrapper[4680]: E0126 16:06:12.169604 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:12 crc kubenswrapper[4680]: E0126 16:06:12.169933 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.207216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.207287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.207311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.207341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.207366 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.310628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.310687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.310704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.310727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.310745 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.413948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.414024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.414046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.414126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.414156 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.516552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.516644 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.516664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.516688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.516705 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.623973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.624023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.624034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.624051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.624082 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.727463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.727522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.727560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.727595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.727617 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.830441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.830514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.830593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.830624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.830643 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.933508 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.933596 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.933622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.933656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:12 crc kubenswrapper[4680]: I0126 16:06:12.933680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:12Z","lastTransitionTime":"2026-01-26T16:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.038453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.038530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.038554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.038584 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.038607 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.134433 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:26:57.893328601 +0000 UTC
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.141365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.141421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.141431 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.141448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.141459 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.244397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.244439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.244449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.244466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.244478 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.346883 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.346930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.346938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.346952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.346961 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.449479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.449530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.449542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.449561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.449573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.552607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.552641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.552652 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.552668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.552679 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.654846 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.654890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.654906 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.654929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.654944 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.757835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.757880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.757891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.757908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.757919 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.861620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.861660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.861669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.861682 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.861690 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.963911 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.963949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.963960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.963999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:13 crc kubenswrapper[4680]: I0126 16:06:13.964009 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:13Z","lastTransitionTime":"2026-01-26T16:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.066760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.066838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.066891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.066919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.066939 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.114840 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.125993 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.135199 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:51:08.827344355 +0000 UTC Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.138527 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.159982 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.168987 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.169149 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.169126 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.169038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:14 crc kubenswrapper[4680]: E0126 16:06:14.169329 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:14 crc kubenswrapper[4680]: E0126 16:06:14.169446 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:14 crc kubenswrapper[4680]: E0126 16:06:14.169554 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:14 crc kubenswrapper[4680]: E0126 16:06:14.169698 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170420 4680 scope.go:117] "RemoveContainer" containerID="ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.170752 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.186956 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2
5v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.201820 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.220361 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.249462 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.272624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.272651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.272658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.272671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.272679 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.283370 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc
8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b83f652870e69addb18e2b5f221679f57b3195b464f1f175a3638de4e84621\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:55Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:05:55.521650 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:05:55.521666 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:05:55.521710 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:05:55.521732 5895 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:05:55.521739 5895 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:05:55.521779 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:05:55.521790 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:05:55.521812 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:05:55.521836 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:05:55.521844 5895 factory.go:656] Stopping watch factory\\\\nI0126 16:05:55.521853 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:05:55.521872 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:05:55.521879 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 16:05:55.521892 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 16:05:55.521899 5895 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.300335 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.316230 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.331278 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.345420 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.356268 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.365811 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.374963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.374981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.374989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.375003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.375011 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.380306 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.390931 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.401008 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.411659 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.432493 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.452045 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc
8155b07fd84fb12d3eb8f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.464766 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.475138 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.476723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.476754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.476761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.476774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.476784 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.490285 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.508658 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.521639 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.539397 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/1.log" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.542166 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.542665 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.555515 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.571157 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9
87117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.578822 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.578845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.578853 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.578866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.578877 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.584878 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.595343 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.604438 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.615555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.627983 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.639026 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f1
1f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.652405 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.664115 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.675091 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.681541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.681595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.681611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.681628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.681639 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.692230 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.704755 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.717507 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.731921 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.741643 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.754659 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.766573 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.777196 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.783897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.783921 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.783932 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.783947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.783958 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.786802 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.804046 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 
16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.820047 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.831137 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.842609 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.862064 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.877289 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.886551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.886594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.886602 4680 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.886619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.886630 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.887934 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.901308 4680 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed26
7a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.912774 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.989001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.989035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.989046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.989079 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:14 crc kubenswrapper[4680]: I0126 16:06:14.989092 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:14Z","lastTransitionTime":"2026-01-26T16:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.090823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.090862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.090872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.090885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.090893 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.136109 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:20:56.885449119 +0000 UTC Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.185056 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.193564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.193613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.193633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.193658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.193675 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.196647 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.205612 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.220661 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.232787 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.242697 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.252509 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.262111 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.272572 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.292468 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.297549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.297614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.297624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.297845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.297856 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.311414 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 
16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.324609 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.336255 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.353254 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.364767 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.378550 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.387378 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.398143 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.399750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.399783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.399796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.399812 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.399824 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.502361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.502400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.502410 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.502427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.502437 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.546237 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/2.log" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.546977 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/1.log" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.550121 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" exitCode=1 Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.550154 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.550186 4680 scope.go:117] "RemoveContainer" containerID="ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.550750 4680 scope.go:117] "RemoveContainer" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" Jan 26 16:06:15 crc kubenswrapper[4680]: E0126 16:06:15.550868 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.567978 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.589248 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.604745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.604852 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.604867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.604883 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.604942 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.605205 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.622335 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.639734 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 
2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.651701 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.661561 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.671991 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.685799 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.695215 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is 
after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.706717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.706747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.706755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.707025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.706985 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.707042 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.736591 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.749914 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:15 crc kubenswrapper[4680]: E0126 16:06:15.750100 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:15 crc kubenswrapper[4680]: E0126 16:06:15.750139 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:06:31.750127023 +0000 UTC m=+66.911399282 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.761613 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee0031892216001b3b1302cfa804aa039f0a94dc8155b07fd84fb12d3eb8f035\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:05:58Z\\\",\\\"message\\\":\\\"plates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 16:05:57.803317 6013 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803343 6013 services_controller.go:453] Built service openshift-etcd/etcd template LB for network=default: []services.LB{}\\\\nI0126 16:05:57.803372 6013 services_controller.go:454] Service openshift-etcd/etcd for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0126 16:05:57.803327 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:05:57Z is after 2025-08-24T17:21:41Z]\\\\nI0126 
16:05:57.80\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.775530 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.790889 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apise
rver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\
":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.809135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.809175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.809185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.809197 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.809205 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.864687 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.875187 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.884117 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.912285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.912358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.912373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.912394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:15 crc kubenswrapper[4680]: I0126 16:06:15.912413 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:15Z","lastTransitionTime":"2026-01-26T16:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.014710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.014739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.014747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.014760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.014768 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.116880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.116930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.116944 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.116961 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.116973 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.137348 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:07:30.620347121 +0000 UTC Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.169305 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.169421 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.169497 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.169571 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.169671 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.169816 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.170043 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.170350 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.219388 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.219447 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.219459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.219479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.219497 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.321604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.321635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.321642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.321655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.321663 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.425219 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.425605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.425771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.425942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.426127 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.528931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.529001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.529019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.529043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.529060 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.557405 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/2.log" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.561885 4680 scope.go:117] "RemoveContainer" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.562113 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.566568 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.566716 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:06:48.566697949 +0000 UTC m=+83.727970218 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.566756 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.567050 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.567134 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:48.567123991 +0000 UTC m=+83.728396260 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.578022 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.603252 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.632776 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.632819 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.632831 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.632851 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.632862 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.636176 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.656896 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d
6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.668383 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.668451 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.668477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668651 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668695 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668718 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668777 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:48.668761845 +0000 UTC m=+83.830034114 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668840 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668854 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668863 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.668888 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:48.668880518 +0000 UTC m=+83.830152787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.669049 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: E0126 16:06:16.669090 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:06:48.669083484 +0000 UTC m=+83.830355743 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.673892 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf
59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.694804 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.715976 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.735892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.735949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.735960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.735976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.735986 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.737651 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.747987 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.756704 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.770882 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.781641 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.791646 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f1
1f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.802606 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.813893 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.824784 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.836397 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.837700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.837730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.837741 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.837757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.837767 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.846458 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:16Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.941028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.941120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.941137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.941161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:16 crc kubenswrapper[4680]: I0126 16:06:16.941176 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:16Z","lastTransitionTime":"2026-01-26T16:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.043770 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.043828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.043838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.043854 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.043868 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.137601 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:35:10.125945549 +0000 UTC Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.146816 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.146863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.146875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.146893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.146902 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.249379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.249416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.249425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.249439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.249450 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.352130 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.352176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.352186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.352203 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.352215 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.454623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.454686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.454704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.454728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.454749 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.557010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.557049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.557058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.557084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.557093 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.659876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.659914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.659926 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.659942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.659954 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.762573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.762609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.762617 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.762632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.762640 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.866108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.866429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.866623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.866800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.866958 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.970151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.970220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.970236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.970260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:17 crc kubenswrapper[4680]: I0126 16:06:17.970276 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:17Z","lastTransitionTime":"2026-01-26T16:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.073385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.073453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.073469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.073491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.073508 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.137945 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:31:29.503299761 +0000 UTC Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.169563 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.169635 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:18 crc kubenswrapper[4680]: E0126 16:06:18.169688 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.169731 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:18 crc kubenswrapper[4680]: E0126 16:06:18.169891 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.170022 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:18 crc kubenswrapper[4680]: E0126 16:06:18.170046 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:18 crc kubenswrapper[4680]: E0126 16:06:18.170246 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.175240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.175266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.175278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.175293 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.175304 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.277315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.277353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.277375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.277396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.277409 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.379919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.380208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.380296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.380394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.380485 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.483604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.483641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.483653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.483670 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.483681 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.585786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.585835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.585846 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.585862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.585875 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.689377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.689431 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.689448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.689473 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.689489 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.791845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.791883 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.791894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.791910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.791922 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.894662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.894734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.894758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.894790 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.894812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.997911 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.998348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.998554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.998746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:18 crc kubenswrapper[4680]: I0126 16:06:18.998885 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:18Z","lastTransitionTime":"2026-01-26T16:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.070122 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.070204 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.070226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.070280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.070297 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.093282 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:19Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.098104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.098143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.098205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.098225 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.098240 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.118890 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:19Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.124606 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.124684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.124709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.124740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.124761 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.145227 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:38:11.375590043 +0000 UTC
Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.162971 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:19Z is after 2025-08-24T17:21:41Z"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.174980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.175053 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.175086 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.175107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.175122 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.205726 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:19Z is after 2025-08-24T17:21:41Z"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.213106 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.213157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
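
The identical failure at 16:06:19.162971 and 16:06:19.205726 is bounded rather than runaway: the kubelet retries the node-status PATCH a fixed number of times per sync loop (the nodeStatusUpdateRetry constant, 5 in the kubelet sources; treat the exact value as an assumption for this build) and then gives up with the "exceeds retry count" error recorded below. A stripped-down sketch of that control flow, not the kubelet's actual code:

// status_retry.go: minimal sketch of the kubelet's bounded status-update retry.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to match the kubelet constant

// patchNodeStatus stands in for the real PATCH call; here it always fails the
// same way the log does, so the loop exhausts its retries.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
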
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.213169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.213185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.213196 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.225170 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:19Z is after 2025-08-24T17:21:41Z"
Jan 26 16:06:19 crc kubenswrapper[4680]: E0126 16:06:19.225282 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.226781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
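
Each of those patch attempts dies on the same x509 error from the node-identity webhook at https://127.0.0.1:9743, an endpoint taken straight from the log, so the status never lands. A minimal diagnostic sketch for confirming the reported validity window, assuming the port is reachable from wherever you run it:

// cert_probe.go: print the webhook serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify lets the handshake complete even though the chain is
	// expired; we only want to read the dates, not trust the peer.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore)
	fmt.Println("notAfter: ", leaf.NotAfter) // expect 2025-08-24T17:21:41Z per the error above
}
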
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.226810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.226821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.226838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.226849 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.329081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.329118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.329127 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.329141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.329149 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.431319 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.431355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.431366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.431383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.431407 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.533863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.533925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.533934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.533948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.533958 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.636461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.636544 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.636607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.636633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.636656 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.739613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.739674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.739696 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.739724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.739744 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.841706 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.841957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.842031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.842137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.842198 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.945647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.945702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.945719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.945738 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:19 crc kubenswrapper[4680]: I0126 16:06:19.946174 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:19Z","lastTransitionTime":"2026-01-26T16:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.048351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.048402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.048418 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.048439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.048454 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
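
The kubelet-serving certificate_manager entries in this boot (16:06:19.145227 above, 16:06:20.145534 below) report the same expiration but a different rotation deadline each time. That is consistent with client-go's certificate manager, which redraws the deadline as a uniformly random point between 70% and 90% of the certificate's lifetime. The sketch below reproduces that calculation with an assumed notBefore, since the log never prints one.

// rotation_deadline.go: jittered rotation deadline in the 70-90% window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline mirrors the documented 80% plus/minus 10% jitter scheme.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter comes from the log; notBefore is a placeholder assumption.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-90 * 24 * time.Hour) // hypothetical issue time
	for i := 0; i < 3; i++ {
		fmt.Println(nextRotationDeadline(notBefore, notAfter)) // moves every call
	}
}
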
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.145534 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:52:18.445919732 +0000 UTC
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.149923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.149945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.149952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.149965 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.149973 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.168570 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:06:20 crc kubenswrapper[4680]: E0126 16:06:20.168652 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.168774 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p"
Jan 26 16:06:20 crc kubenswrapper[4680]: E0126 16:06:20.168825 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.169060 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:06:20 crc kubenswrapper[4680]: E0126 16:06:20.169820 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.169870 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:06:20 crc kubenswrapper[4680]: E0126 16:06:20.169922 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.251677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.251701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.251708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.251721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.251729 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.355025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.355050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.355059 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.355097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.355111 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
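
All of the NotReady churn above traces back to the one fact the runtime keeps repeating: /etc/kubernetes/cni/net.d/ holds no CNI configuration, so none of the listed pods (networking-console-plugin, network-metrics-daemon, network-check-source, network-check-target) can get a sandbox. A quick on-node check of that directory; the extension list is an assumption about what the runtime conventionally loads:

// cni_check.go: list CNI config files the kubelet/runtime would pick up.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed conventional extensions
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found, consistent with NetworkReady=false above")
	}
}
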
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.457796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.457845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.457864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.457888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.457905 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.559790 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.559839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.559848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.559866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.559876 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.662714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.662771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.662795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.662829 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.662852 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.765820 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.765894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.765918 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.765950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:20 crc kubenswrapper[4680]: I0126 16:06:20.765974 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:20Z","lastTransitionTime":"2026-01-26T16:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.006510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.006546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.006555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.006573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.006581 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.108511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.108552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.108565 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.108583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.108595 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.146004 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:51:09.492630188 +0000 UTC
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.210925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.210963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.210976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.210995 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.211007 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.313143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.313435 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.313574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.313711 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.313841 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.416253 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.416309 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.416318 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.416332 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.416342 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.519124 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.519476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.519700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.519962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.520200 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.623465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.623816 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.624009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.624219 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.624410 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.727701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.727755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.727779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.727808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.727827 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.830862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.830913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.830930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.830954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.830974 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.933759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.933811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.933824 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.933848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:21 crc kubenswrapper[4680]: I0126 16:06:21.933863 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:21Z","lastTransitionTime":"2026-01-26T16:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.036261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.036306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.036323 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.036349 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.036369 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.138571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.138642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.138668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.138699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.138725 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.146828 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:21:30.918374621 +0000 UTC Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.169287 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.169322 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.169322 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.169305 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:22 crc kubenswrapper[4680]: E0126 16:06:22.169501 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:22 crc kubenswrapper[4680]: E0126 16:06:22.169588 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:22 crc kubenswrapper[4680]: E0126 16:06:22.169648 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:22 crc kubenswrapper[4680]: E0126 16:06:22.169711 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.241953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.242572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.242810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.243005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.243250 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.345687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.345916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.346019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.346145 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.346240 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.448245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.448309 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.448331 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.448361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.448383 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.552065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.552183 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.552207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.552240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.552263 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.654721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.654793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.654814 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.654838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.654877 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.757748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.758134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.758346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.758495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.758645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.861878 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.862263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.862446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.862618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.862757 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.965522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.965572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.965588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.965611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:22 crc kubenswrapper[4680]: I0126 16:06:22.965627 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:22Z","lastTransitionTime":"2026-01-26T16:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.068742 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.068789 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.068805 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.068828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.068847 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.148772 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:11:36.634574109 +0000 UTC Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.171737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.171809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.171832 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.171864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.171888 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.275734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.276548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.276876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.277114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.277349 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.380875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.380964 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.380977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.380998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.381035 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.483774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.484148 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.484284 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.484422 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.484612 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.588192 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.588255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.588273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.588301 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.588318 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.691009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.691126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.691153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.691182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.691202 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.794339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.794417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.794443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.794474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.794495 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.897153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.897239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.897275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.897309 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.897332 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.999185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.999256 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.999269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.999287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:23 crc kubenswrapper[4680]: I0126 16:06:23.999300 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:23Z","lastTransitionTime":"2026-01-26T16:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.102005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.102037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.102046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.102060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.102092 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.149589 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:22:29.183643569 +0000 UTC Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.169004 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:24 crc kubenswrapper[4680]: E0126 16:06:24.169217 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.169352 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.169496 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.169582 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:24 crc kubenswrapper[4680]: E0126 16:06:24.169663 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:24 crc kubenswrapper[4680]: E0126 16:06:24.169793 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:24 crc kubenswrapper[4680]: E0126 16:06:24.169901 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.205171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.205498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.205633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.205798 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.205964 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.308899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.309184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.309293 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.309376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.309463 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.412515 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.412566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.412580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.412600 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.412613 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.514928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.514995 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.515013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.515040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.515060 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.617435 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.617499 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.617522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.617552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.617574 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.719664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.719699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.719708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.719721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.719729 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.821630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.821681 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.821692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.821707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.821716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.923602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.923843 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.923902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.923968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:24 crc kubenswrapper[4680]: I0126 16:06:24.924037 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:24Z","lastTransitionTime":"2026-01-26T16:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.027236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.027540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.027627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.027743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.027830 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.131314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.131394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.131420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.131451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.131473 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.149868 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:49:08.697519214 +0000 UTC Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.193196 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.235049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.235272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.235362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.235421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.235478 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.237922 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.254722 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d
6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.269575 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.279849 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.291897 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.303813 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.316108 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.325009 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339195 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339204 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.339136 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.352497 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.363657 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.372315 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.383799 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.395461 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.406511 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.416168 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.426146 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.441445 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.441469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.441477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.441490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.441499 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.543554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.543588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.543597 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.543610 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.543635 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.646524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.646778 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.646787 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.646801 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.646812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.753108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.753184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.753205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.753229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.753246 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.856498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.856616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.856637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.856663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.856680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.959002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.959036 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.959048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.959168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:25 crc kubenswrapper[4680]: I0126 16:06:25.959215 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:25Z","lastTransitionTime":"2026-01-26T16:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.062523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.062611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.062629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.062655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.062673 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.151248 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:07:14.206808518 +0000 UTC Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.165514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.165552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.165561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.165579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.165588 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.168840 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.168893 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.168894 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.169006 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:26 crc kubenswrapper[4680]: E0126 16:06:26.169014 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:26 crc kubenswrapper[4680]: E0126 16:06:26.169144 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:26 crc kubenswrapper[4680]: E0126 16:06:26.169230 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:26 crc kubenswrapper[4680]: E0126 16:06:26.169489 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.267910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.267976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.267987 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.268003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.268012 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.370477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.370526 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.370538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.370555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.370565 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.472637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.472671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.472679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.472693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.472702 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.574765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.574808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.574816 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.574834 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.574843 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.676755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.676811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.676821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.676839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.676850 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.779691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.779768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.779783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.779810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.779826 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.881629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.881661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.881668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.881681 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.881690 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.983928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.983966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.983976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.983992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:26 crc kubenswrapper[4680]: I0126 16:06:26.984003 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:26Z","lastTransitionTime":"2026-01-26T16:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.085974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.086013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.086023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.086041 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.086051 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.151634 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:30:40.402823961 +0000 UTC Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.188010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.188051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.188060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.188141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.188152 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.290335 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.290385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.290393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.290407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.290417 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.393723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.393860 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.393886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.393913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.393979 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.496492 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.496533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.496544 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.496564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.496573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.598378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.598441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.598455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.598470 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.598481 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.699963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.700087 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.700105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.700121 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.700132 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.802282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.802333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.802354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.802376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.802391 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.903802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.903841 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.903850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.903864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:27 crc kubenswrapper[4680]: I0126 16:06:27.903875 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:27Z","lastTransitionTime":"2026-01-26T16:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.007135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.007263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.007329 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.007363 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.007425 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.109625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.109678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.109689 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.109707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.109719 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.152299 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:49:14.917174138 +0000 UTC Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.168591 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.168639 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.168726 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:28 crc kubenswrapper[4680]: E0126 16:06:28.168737 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.168609 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:28 crc kubenswrapper[4680]: E0126 16:06:28.168910 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:28 crc kubenswrapper[4680]: E0126 16:06:28.169163 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:28 crc kubenswrapper[4680]: E0126 16:06:28.170045 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.211976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.212024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.212037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.212055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.212085 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.314114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.314153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.314163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.314178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.314192 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.416605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.416661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.416676 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.416698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.416716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.557432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.557461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.557488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.557502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.557512 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.659520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.659559 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.659571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.659586 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:28 crc kubenswrapper[4680]: I0126 16:06:28.659600 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:28Z","lastTransitionTime":"2026-01-26T16:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.573707 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:45:13.053212326 +0000 UTC Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.573929 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.573957 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.574018 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.574258 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.574822 4680 scope.go:117] "RemoveContainer" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.574965 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.575279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.575307 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.575314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.575328 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.575337 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.576210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.576238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.576250 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.576265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.576276 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.589406 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.593372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.593394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.593403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.593416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.593426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.604497 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.608410 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.608440 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.608455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.608470 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.608482 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.619529 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.623298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.623321 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.623329 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.623342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.623354 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.635013 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.638043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.638108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.638120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.638137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.638149 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.651642 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:29 crc kubenswrapper[4680]: E0126 16:06:29.651788 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.677432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.677466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.677475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.677488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.677497 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.780015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.780136 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.780156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.780180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.780197 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.882326 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.882367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.882379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.882394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.882407 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.984589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.984623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.984632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.984647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:29 crc kubenswrapper[4680]: I0126 16:06:29.984656 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:29Z","lastTransitionTime":"2026-01-26T16:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.086793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.086836 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.086848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.086866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.086879 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.169498 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:30 crc kubenswrapper[4680]: E0126 16:06:30.169608 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.169503 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:30 crc kubenswrapper[4680]: E0126 16:06:30.169821 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.190415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.190446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.190455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.190468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.190477 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.292588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.292649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.292668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.292695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.292713 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.394585 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.394625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.394636 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.394651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.394663 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.496360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.496397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.496408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.496424 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.496435 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.574497 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:14:14.394301001 +0000 UTC Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.598569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.598616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.598624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.598637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.598645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.700352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.700383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.700391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.700406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.700414 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.802304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.802334 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.802344 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.802360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.802369 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.904295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.904343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.904387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.904404 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:30 crc kubenswrapper[4680]: I0126 16:06:30.904414 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:30Z","lastTransitionTime":"2026-01-26T16:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.006238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.006276 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.006289 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.006333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.006345 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.108757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.108793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.108810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.108826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.108838 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.171446 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:31 crc kubenswrapper[4680]: E0126 16:06:31.171547 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.171838 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:31 crc kubenswrapper[4680]: E0126 16:06:31.171887 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.211186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.211232 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.211241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.211258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.211276 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.313447 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.313489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.313498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.313512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.313523 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.415129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.415156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.415164 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.415176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.415185 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.517879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.517901 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.517909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.517919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.517927 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.574991 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:23:26.848006757 +0000 UTC Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.620378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.620403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.620413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.620428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.620437 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.722043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.722085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.722096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.722109 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.722117 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.801530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:31 crc kubenswrapper[4680]: E0126 16:06:31.801719 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:31 crc kubenswrapper[4680]: E0126 16:06:31.801822 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. 
No retries permitted until 2026-01-26 16:07:03.80179058 +0000 UTC m=+98.963062889 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.823281 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.823359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.823368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.823381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.823389 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.925603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.925631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.925639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.925652 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:31 crc kubenswrapper[4680]: I0126 16:06:31.925660 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:31Z","lastTransitionTime":"2026-01-26T16:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.028215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.028264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.028278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.028297 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.028312 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.130207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.130233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.130241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.130273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.130282 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.169195 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:32 crc kubenswrapper[4680]: E0126 16:06:32.169301 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.169552 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:32 crc kubenswrapper[4680]: E0126 16:06:32.169648 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.232343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.232380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.232391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.232407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.232418 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.334376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.334406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.334415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.334429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.334437 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.436515 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.436543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.436552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.436566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.436575 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.538561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.538599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.538607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.538618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.538626 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.575279 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:59:14.843365365 +0000 UTC Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.640665 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.640706 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.640717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.640733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.640741 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.742850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.742888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.742899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.742920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.742951 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.844941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.844982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.844992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.845009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.845019 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.946937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.946968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.946976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.946988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:32 crc kubenswrapper[4680]: I0126 16:06:32.946997 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:32Z","lastTransitionTime":"2026-01-26T16:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.049498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.049534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.049542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.049555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.049564 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.151578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.151616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.151627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.151644 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.151655 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.168920 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:33 crc kubenswrapper[4680]: E0126 16:06:33.169044 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.169411 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:33 crc kubenswrapper[4680]: E0126 16:06:33.169477 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.253654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.253695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.253705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.253719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.253728 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.355574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.355625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.355637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.355654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.355665 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.458300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.458350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.458358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.458372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.458381 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.560979 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.561046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.561088 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.561116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.561133 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.576131 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:58:47.877365776 +0000 UTC Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.608965 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/0.log" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.609204 4680 generic.go:334] "Generic (PLEG): container finished" podID="9ac04312-7b74-4193-9b93-b54b91bab69b" containerID="5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6" exitCode=1 Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.609303 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerDied","Data":"5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.609749 4680 scope.go:117] "RemoveContainer" containerID="5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.624309 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.640262 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.650050 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663291 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663325 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663335 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663359 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.663261 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.677310 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.696535 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.714896 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.727948 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.739161 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e
9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.749904 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.759875 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.765737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.765768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.765777 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.765792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.765802 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.769670 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.778362 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.788378 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.797207 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.807593 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.819010 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.830113 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:33Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.868555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.868588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.868599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.868617 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.868631 4680 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.970804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.970833 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.970844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.970862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:33 crc kubenswrapper[4680]: I0126 16:06:33.970874 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:33Z","lastTransitionTime":"2026-01-26T16:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.072970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.072994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.073002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.073018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.073029 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.169084 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.169107 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:34 crc kubenswrapper[4680]: E0126 16:06:34.169218 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:34 crc kubenswrapper[4680]: E0126 16:06:34.169296 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.174488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.174515 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.174527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.174539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.174548 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.276404 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.276428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.276435 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.276448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.276457 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.378651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.378690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.378700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.378716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.378726 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.480494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.480542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.480556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.480572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.480584 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.577108 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:36:05.290479666 +0000 UTC Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.582830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.582864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.582877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.582895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.582906 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.613015 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/0.log" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.613093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerStarted","Data":"baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.624270 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.635714 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.647966 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.665861 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d
6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.679095 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.685169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.685205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.685218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.685234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.685262 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.688395 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.700298 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.724099 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.736670 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.745332 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.757052 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.767822 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.779295 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.787588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.787634 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.787645 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.787662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.787673 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.792870 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.803641 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.815462 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.827836 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.838511 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:34Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.890400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.890434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.890444 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.890462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.890473 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.992549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.992587 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.992599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.992615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:34 crc kubenswrapper[4680]: I0126 16:06:34.992627 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:34Z","lastTransitionTime":"2026-01-26T16:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.094849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.094884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.094895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.094912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.094924 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.169549 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.169560 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:35 crc kubenswrapper[4680]: E0126 16:06:35.169690 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:35 crc kubenswrapper[4680]: E0126 16:06:35.169769 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.181196 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.191088 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.196872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.196899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.196908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.196922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.196932 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.198935 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.210809 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.221740 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.231605 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.240824 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.250439 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.259632 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.278796 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.292493 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.299582 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.299621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.299632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.299649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.299664 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.302545 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.313856 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.331475 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.345773 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.355589 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.366810 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.380283 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.401857 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.401883 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.401894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.401908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.401918 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.504588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.504618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.504626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.504641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.504653 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.578062 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:09:35.712835548 +0000 UTC Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.606716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.606760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.606770 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.606783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.606792 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.708996 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.709029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.709039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.709055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.709080 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.811181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.811248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.811264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.811660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.811714 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.914971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.915001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.915014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.915030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:35 crc kubenswrapper[4680]: I0126 16:06:35.915042 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:35Z","lastTransitionTime":"2026-01-26T16:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.017490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.017530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.017541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.017559 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.017571 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.119491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.119518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.119527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.119540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.119548 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.169241 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.169248 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:36 crc kubenswrapper[4680]: E0126 16:06:36.169392 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:36 crc kubenswrapper[4680]: E0126 16:06:36.169449 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.222191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.222224 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.222232 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.222247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.222256 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.324675 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.324743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.324756 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.324772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.324782 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.429251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.429342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.429352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.429366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.429374 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.531907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.531944 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.531952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.531968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.531977 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.579111 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:29:37.524739168 +0000 UTC Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.634314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.634350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.634364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.634380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.634392 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.736456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.736488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.736496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.736510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.736522 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.838629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.838664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.838674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.838687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.838695 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.940561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.940593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.940603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.940616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:36 crc kubenswrapper[4680]: I0126 16:06:36.940625 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:36Z","lastTransitionTime":"2026-01-26T16:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.042743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.042775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.042787 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.042802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.042813 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.145207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.145280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.145300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.145325 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.145412 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.169648 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.169742 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:37 crc kubenswrapper[4680]: E0126 16:06:37.169750 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:37 crc kubenswrapper[4680]: E0126 16:06:37.169937 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.248003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.248128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.248139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.248152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.248161 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.349972 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.350013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.350022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.350038 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.350094 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.451886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.451930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.451939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.451954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.451964 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.554479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.554512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.554520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.554534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.554542 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.580164 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:14:24.046324296 +0000 UTC Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.656984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.657028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.657037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.657054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.657097 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.759430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.759455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.759463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.759477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.759485 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.862339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.862382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.862396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.862413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.862425 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.964710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.964744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.964754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.964771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:37 crc kubenswrapper[4680]: I0126 16:06:37.964783 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:37Z","lastTransitionTime":"2026-01-26T16:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.067221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.067254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.067266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.067280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.067294 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169029 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:38 crc kubenswrapper[4680]: E0126 16:06:38.169142 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169169 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:38 crc kubenswrapper[4680]: E0126 16:06:38.169321 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.169419 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.272048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.272114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.272133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.272153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.272167 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.374120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.374212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.374272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.374295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.374342 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.476581 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.476609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.476636 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.476649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.476661 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.579133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.579207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.579221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.579237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.579257 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.580266 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:25:24.833018977 +0000 UTC Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.681945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.681986 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.681998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.682014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.682025 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.784542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.784570 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.784579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.784594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.784613 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.887376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.887401 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.887409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.887421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.887429 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.990433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.990476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.990484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.990500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:38 crc kubenswrapper[4680]: I0126 16:06:38.990509 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:38Z","lastTransitionTime":"2026-01-26T16:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.093381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.093431 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.093445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.093467 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.093482 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.168937 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.169062 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.169546 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.169719 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.195420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.195624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.195708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.195788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.195900 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.298719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.298755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.298766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.298779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.298803 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.402204 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.402262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.402272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.402288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.402299 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.504365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.504396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.504405 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.504419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.504427 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.580372 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:01:14.846620802 +0000 UTC Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.607039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.607099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.607115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.607135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.607149 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.707601 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.707633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.707642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.707656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.707666 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.720299 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.723716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.723881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.723948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.724025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.724122 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.736820 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.740141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.740267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.740359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.740459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.740526 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.752960 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.755946 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.755982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.755994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.756009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.756019 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.767299 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.770541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.770579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.770592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.770611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.770623 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.781882 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:39 crc kubenswrapper[4680]: E0126 16:06:39.782008 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.783700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.783732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.783772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.783787 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.783797 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.885623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.885667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.885677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.885694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.885706 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.988970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.989017 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.989036 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.989058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:39 crc kubenswrapper[4680]: I0126 16:06:39.989108 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:39Z","lastTransitionTime":"2026-01-26T16:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.092433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.092475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.092485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.092505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.092515 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.169212 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.169311 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:40 crc kubenswrapper[4680]: E0126 16:06:40.169391 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:40 crc kubenswrapper[4680]: E0126 16:06:40.169514 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.195229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.195291 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.195312 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.195336 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.195354 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.302274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.302556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.302704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.302771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.302836 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.404951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.405001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.405022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.405050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.405102 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.507532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.507569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.507578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.507592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.507600 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.581331 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:59:03.078205371 +0000 UTC Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.609959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.609984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.609993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.610007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.610016 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.713213 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.713278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.713296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.713320 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.713336 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.815863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.815897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.815909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.815923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.815933 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.918613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.918656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.918668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.918686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:40 crc kubenswrapper[4680]: I0126 16:06:40.918698 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:40Z","lastTransitionTime":"2026-01-26T16:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.023296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.023346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.023357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.023375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.023390 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.126590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.126619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.126629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.126646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.126657 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.170381 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.170491 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:41 crc kubenswrapper[4680]: E0126 16:06:41.170585 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:41 crc kubenswrapper[4680]: E0126 16:06:41.170723 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.171890 4680 scope.go:117] "RemoveContainer" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.230659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.230694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.230720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.230737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.230748 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.333212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.333245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.333254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.333278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.333290 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.435843 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.435886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.435896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.435913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.435925 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.538792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.538827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.538839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.538858 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.538870 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.585476 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:01:26.478942383 +0000 UTC Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.632586 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/2.log" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.634573 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.635264 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.640880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.640933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.640949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.640970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.640982 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.652028 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.674458 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.694587 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd4
28b5f4cc22041793d3666ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.712246 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.723548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.739701 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.743114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.743155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.743166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.743182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.743194 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.754342 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.768588 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.779564 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.790501 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.804727 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.818237 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.826995 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.838819 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.845517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.845555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.845566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.845592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.845605 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.848816 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.858621 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.868258 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.878695 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:41Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.947913 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.947954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.947967 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.947984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:41 crc kubenswrapper[4680]: I0126 16:06:41.947996 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:41Z","lastTransitionTime":"2026-01-26T16:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.050997 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.051288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.051416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.051548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.051654 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.154415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.154491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.154513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.154545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.154564 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.169269 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.169290 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:42 crc kubenswrapper[4680]: E0126 16:06:42.169459 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:42 crc kubenswrapper[4680]: E0126 16:06:42.169722 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.257688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.258106 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.258342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.258531 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.258708 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.361809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.361884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.361896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.361913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.361935 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.464733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.464763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.464771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.464784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.464792 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.567138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.567183 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.567194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.567212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.567223 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.586547 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:38:22.920239713 +0000 UTC Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.638829 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.639319 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/2.log" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.641849 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" exitCode=1 Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.641881 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.641909 4680 scope.go:117] "RemoveContainer" containerID="6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.642462 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:06:42 crc kubenswrapper[4680]: E0126 16:06:42.642584 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.656622 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.669997 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.670054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.670090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.670111 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.670123 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.671293 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.684413 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.696394 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.708196 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.724404 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.741795 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.763024 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a93effd23b9f55b98fec4fe50930c8a2b3a232d6f3da91a6886a1bca9e3431c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:15Z\\\",\\\"message\\\":\\\"o:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 4.217907ms\\\\nI0126 16:06:14.947618 6227 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:14.947920 6227 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nI0126 16:06:14.947935 6227 services_controller.go:356] Processing sync for service openshift-marketplace/certified-operators for network=default\\\\nI0126 16:06:14.947956 6227 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 16:06:14.947969 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:42Z\\\",\\\"message\\\":\\\"c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006726 6606 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006786 6606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admiss\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.772532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.772573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.772596 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.772617 4680 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.772635 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.773822 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.784935 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.794640 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.804364 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.814498 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.822768 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.833182 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.843024 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.852123 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.862464 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:42Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.874745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.874770 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.874779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.874793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.874801 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.977578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.977641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.977666 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.977696 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:42 crc kubenswrapper[4680]: I0126 16:06:42.977719 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:42Z","lastTransitionTime":"2026-01-26T16:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.081300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.081344 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.081357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.081376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.081393 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.169540 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:43 crc kubenswrapper[4680]: E0126 16:06:43.169932 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.170054 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:43 crc kubenswrapper[4680]: E0126 16:06:43.170212 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.184329 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.184377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.184394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.184416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.184432 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.287552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.287577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.287586 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.287599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.287607 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.389866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.389894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.389903 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.389915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.389923 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.492195 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.492228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.492241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.492295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.492309 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.587057 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:01:30.602630003 +0000 UTC Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.595568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.595619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.595630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.595644 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.595655 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.646971 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.651709 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:06:43 crc kubenswrapper[4680]: E0126 16:06:43.651990 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.670721 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.683241 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.697813 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.697851 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.697863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.697880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.697891 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.702209 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.725735 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.746547 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:42Z\\\",\\\"message\\\":\\\"c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006726 6606 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006786 6606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admiss\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.761920 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.771264 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.782170 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.792026 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.800108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.800135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.800143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.800157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.800166 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.802219 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.811283 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.822910 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.835030 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.846958 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.856001 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.866762 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.876918 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.886755 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:43Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.902411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.902461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.902472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.902490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:43 crc kubenswrapper[4680]: I0126 16:06:43.902502 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:43Z","lastTransitionTime":"2026-01-26T16:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.004810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.004848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.004864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.004879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.004889 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.107299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.107343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.107358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.107381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.107397 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.168680 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:44 crc kubenswrapper[4680]: E0126 16:06:44.168840 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.169130 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:44 crc kubenswrapper[4680]: E0126 16:06:44.169226 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.185643 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.209689 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.209769 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.209803 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.209835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.209855 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.312795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.312850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.312869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.312934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.312951 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.415560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.415601 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.415610 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.415623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.415632 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.524885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.525186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.525273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.525364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.525529 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.589259 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:44:01.444875689 +0000 UTC Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.627459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.627493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.627501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.627513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.627521 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.730722 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.730779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.730794 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.730812 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.730825 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.834453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.834519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.834534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.834554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.834569 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.937814 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.937871 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.937885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.937909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:44 crc kubenswrapper[4680]: I0126 16:06:44.937924 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:44Z","lastTransitionTime":"2026-01-26T16:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.041371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.041414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.041427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.041450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.041463 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.143899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.143934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.143942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.143955 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.143963 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.168732 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.168880 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:45 crc kubenswrapper[4680]: E0126 16:06:45.169025 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:45 crc kubenswrapper[4680]: E0126 16:06:45.169181 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.187108 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.204413 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.223323 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245737 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029
aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245801 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.245966 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.264187 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:42Z\\\",\\\"message\\\":\\\"c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006726 6606 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006786 6606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} 
name:Service_openshift-multus/multus-admiss\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.278785 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.290153 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.301866 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48db185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.315813 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.328534 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.338122 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.347438 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.347672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.347752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.347850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.347930 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.356125 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.366332 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2c6eb4-396f-4ba6-9bdb-f00fb75783d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62222bdb0996eeb7ef310cd37b4fb75c631e560a6820c6d1d9ec9d041020c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.376895 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.388141 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.397646 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.409998 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.420755 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.433088 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.450378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.450414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.450425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.450444 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.450453 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.552756 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.552822 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.552839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.552864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.552880 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.590193 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:39:14.678893 +0000 UTC Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.655547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.655592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.655604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.655648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.655661 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.757835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.758046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.758054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.758084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.758093 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.860792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.861141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.861235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.861311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.861384 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.964212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.964268 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.964277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.964292 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:45 crc kubenswrapper[4680]: I0126 16:06:45.964301 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:45Z","lastTransitionTime":"2026-01-26T16:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.068046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.068118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.068132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.068150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.068175 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.187527 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:46 crc kubenswrapper[4680]: E0126 16:06:46.188130 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.188169 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:46 crc kubenswrapper[4680]: E0126 16:06:46.188558 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.189895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.189934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.189943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.189963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.189973 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.292432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.292469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.292477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.292492 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.292513 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.395498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.395538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.395551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.395570 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.395584 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.498402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.498436 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.498445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.498460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.498471 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.592128 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:49:00.099076991 +0000 UTC Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.600830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.601049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.601167 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.601268 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.601352 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.704039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.704085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.704096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.704111 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.704123 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.805619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.805648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.805658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.805673 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.805684 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.908249 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.908283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.908296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.908311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:46 crc kubenswrapper[4680]: I0126 16:06:46.908321 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:46Z","lastTransitionTime":"2026-01-26T16:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.010641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.010929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.011040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.011156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.011264 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.114098 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.114406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.114494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.114588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.114716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.169527 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.169561 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:47 crc kubenswrapper[4680]: E0126 16:06:47.169655 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:47 crc kubenswrapper[4680]: E0126 16:06:47.169748 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.216939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.217212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.217287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.217379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.217440 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.320658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.320705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.320718 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.320737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.320750 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.423792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.423830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.423838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.423853 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.423864 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.526665 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.526701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.526712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.526729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.526740 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.593233 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:57:24.161354465 +0000 UTC Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.631360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.631450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.631474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.631506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.631537 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.733976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.734020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.734030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.734044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.734053 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.837466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.837740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.837835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.837933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.838036 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.941765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.941834 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.941847 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.941871 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:47 crc kubenswrapper[4680]: I0126 16:06:47.941888 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:47Z","lastTransitionTime":"2026-01-26T16:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.044022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.044096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.044108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.044125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.044138 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.147470 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.147602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.147621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.147648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.147700 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.168996 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.169004 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.169270 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.169437 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.250353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.250385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.250393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.250419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.250430 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.352520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.352562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.352690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.352709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.352719 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.455702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.455743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.455759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.455774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.455785 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.558168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.558209 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.558220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.558236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.558244 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.593709 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:49:57.365944723 +0000 UTC Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.613319 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.613463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.613568 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.613672 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.613630493 +0000 UTC m=+147.774902802 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.613717 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.613703955 +0000 UTC m=+147.774976224 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.660938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.660998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.661011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.661029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.661042 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.722856 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.722933 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.722979 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723143 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723182 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723194 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
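Note that these mount failures are not API-server errors: the kubelet's secret and configmap managers serve only objects that have been registered for an admitted pod, and until that registration catches up after restart they fail fast with object "namespace"/"name" not registered. A toy cache with the same behavior is sketched below; the names and structure are illustrative, not the kubelet's actual manager API (the real code lives under pkg/kubelet/util/manager).

```go
// Sketch only: a cache that refuses lookups for unregistered objects,
// reproducing the error string seen in the MountVolume.SetUp failures.
package main

import (
	"fmt"
	"sync"
)

type objectCache struct {
	mu         sync.Mutex
	registered map[string]bool // "ns"/"name" -> known to the cache
}

func key(ns, name string) string { return fmt.Sprintf("%q/%q", ns, name) }

// RegisterPodObject is called when a pod referencing the object is admitted.
func (c *objectCache) RegisterPodObject(ns, name string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.registered[key(ns, name)] = true
}

// Get fails fast for unregistered objects instead of querying the API server.
func (c *objectCache) Get(ns, name string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.registered[key(ns, name)] {
		return fmt.Errorf("object %s not registered", key(ns, name))
	}
	return nil // would return the secret/configmap here
}

func main() {
	c := &objectCache{registered: map[string]bool{}}
	fmt.Println(c.Get("openshift-network-console", "networking-console-plugin-cert"))
	c.RegisterPodObject("openshift-network-console", "networking-console-plugin-cert")
	fmt.Println(c.Get("openshift-network-console", "networking-console-plugin-cert"))
}
```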
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723194 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723236 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.723221771 +0000 UTC m=+147.884494040 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723280 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.723255162 +0000 UTC m=+147.884527471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723197 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723330 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723352 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:48 crc kubenswrapper[4680]: E0126 16:06:48.723413 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.723397016 +0000 UTC m=+147.884669315 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.764327 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.764399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.764412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.764429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.764443 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.866715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.866755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.866766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.866785 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.866800 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
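Each failed mount or unmount re-arms a per-volume exponential backoff, which is where "No retries permitted until ... (durationBeforeRetry 1m4s)" comes from. The sketch below assumes the conventional 500 ms initial delay, doubling on each failure, capped at roughly two minutes; under those assumptions eight consecutive failures land exactly on 1m4s, consistent with volumes that have been failing since the kubelet restarted.

```go
// Sketch only: per-operation exponential backoff in the style of the
// kubelet's nestedpendingoperations (initial delay, factor, and cap are
// assumptions, not lifted from the source).
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration
	next  time.Time // "No retries permitted until" this instant
}

// failed records another failure, doubling the delay up to a cap.
func (b *backoff) failed(now time.Time) {
	const (
		initial = 500 * time.Millisecond
		maxWait = 2*time.Minute + 2*time.Second
	)
	if b.delay == 0 {
		b.delay = initial
	} else {
		b.delay *= 2
		if b.delay > maxWait {
			b.delay = maxWait
		}
	}
	b.next = now.Add(b.delay)
}

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 8; i++ {
		b.failed(now)
		fmt.Printf("failure %d: durationBeforeRetry %v\n", i, b.delay)
	}
	// failure 8 prints "1m4s", matching the log after repeated failures.
}
```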
Has your network provider started?"} Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.970239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.970303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.970316 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.970337 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:48 crc kubenswrapper[4680]: I0126 16:06:48.970350 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:48Z","lastTransitionTime":"2026-01-26T16:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.073984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.074125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.074156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.074196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.074223 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.168664 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.168721 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.169036 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.169312 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.177018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.177044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.177052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.177063 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.177085 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.279496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.279568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.279579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.279616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.279629 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.381907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.381936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.381945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.381957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.381965 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.484373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.484405 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.484416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.484432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.484443 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.586372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.586406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.586415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.586430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.586439 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.594254 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:34:18.799919767 +0000 UTC Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.688830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.689178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.689186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.689204 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.689214 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.791535 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.791610 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.791626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.791642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.791653 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.894259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.894305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.894320 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.894341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.894356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.924702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.924746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.924757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.924774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.924784 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
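The node stays NotReady because nothing has yet written a CNI network configuration into /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness test, assuming only that the directory must contain at least one .conf, .conflist, or .json file (the usual libcni convention); this is an illustration, not the kubelet's actual code:

```go
// Sketch: approximate the "no CNI configuration file" check seen above.
// Assumption (illustrative, not kubelet source): the node is considered
// network-ready once /etc/kubernetes/cni/net.d holds a CNI config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("found CNI config: %s\n", e.Name())
			return
		}
	}
	fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
}
```

On this node the directory is empty because the network operator's pods cannot come up, so the message repeats on every sync until the network provider writes its config.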
Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.938109 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:49Z is after 
2025-08-24T17:21:41Z" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.941348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.941384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.941395 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.941411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.941423 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.952321 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:49Z is after 
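The status patch above is rejected because the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, months before the node's current clock time. A hedged sketch of how one might confirm that from the node itself, by reading the validity window of whatever certificate the endpoint presents (not part of the logged components):

```go
// Sketch: probe the failing webhook endpoint and print its certificate's
// validity window. InsecureSkipVerify is used only so the handshake
// succeeds against an already-expired certificate; illustrative code.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC())
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC()) // log reports 2025-08-24T17:21:41Z
}
```

Until that serving certificate is rotated or regenerated, every node-status patch that goes through the webhook will keep failing with the same x509 error, which is why the kubelet logs "will retry" and loops.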
2025-08-24T17:21:41Z" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.956156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.956187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.956196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.956210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.956222 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.969248 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:49Z is after 
2025-08-24T17:21:41Z" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.972862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.973039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.973171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.973281 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.973421 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.985807 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:49Z is after 
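The "certificate has expired or is not yet valid" text recorded above is the standard x509 time check: a verification time outside the certificate's [NotBefore, NotAfter] window is rejected. A self-contained sketch reproducing the comparison with the timestamps from this log (illustrative only):

```go
// Sketch: the validity-window comparison behind "x509: certificate has
// expired or is not yet valid". Timestamps mirror the log entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z") // cert NotAfter
	now, _ := time.Parse(time.RFC3339, "2026-01-26T16:06:49Z")      // node clock
	if now.After(notAfter) {
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
}
```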
2025-08-24T17:21:41Z" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.988693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.988720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.988730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.988744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:49 crc kubenswrapper[4680]: I0126 16:06:49.988754 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:49Z","lastTransitionTime":"2026-01-26T16:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.998733 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:49Z is after 
2025-08-24T17:21:41Z" Jan 26 16:06:49 crc kubenswrapper[4680]: E0126 16:06:49.998843 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.000340 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.000370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.000378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.000390 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.000399 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.101856 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.101881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.101890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.101902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.101910 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.168569 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.168627 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:50 crc kubenswrapper[4680]: E0126 16:06:50.168703 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:50 crc kubenswrapper[4680]: E0126 16:06:50.168867 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.204674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.204936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.205056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.205236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.205356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.308428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.308465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.308480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.308502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.308514 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.410995 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.411400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.411505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.411648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.411803 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.513777 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.514093 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.514173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.514248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.514311 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.595379 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:12:59.687401986 +0000 UTC Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.616108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.616295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.616358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.616455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.616513 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.718923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.718992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.719017 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.719049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.719102 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.821733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.821775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.821786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.821802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.821812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.925908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.925954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.925968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.925986 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:50 crc kubenswrapper[4680]: I0126 16:06:50.925997 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:50Z","lastTransitionTime":"2026-01-26T16:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.028968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.029001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.029009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.029024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.029035 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.132128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.132224 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.132247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.132305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.132326 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.168888 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.168995 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:51 crc kubenswrapper[4680]: E0126 16:06:51.169217 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:51 crc kubenswrapper[4680]: E0126 16:06:51.169550 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.267017 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.267084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.267096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.267113 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.267128 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.369974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.370112 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.370140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.370173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.370198 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.473475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.473559 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.473578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.473603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.473621 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.575988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.576055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.576098 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.576124 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.576142 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.595645 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:52:29.91569239 +0000 UTC Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.677907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.677947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.677962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.677981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.678000 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.781264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.781342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.781365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.781400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.781426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.884959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.885039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.885059 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.885124 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.885146 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.988096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.988173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.988196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.988229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:51 crc kubenswrapper[4680]: I0126 16:06:51.988253 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:51Z","lastTransitionTime":"2026-01-26T16:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.090932 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.090973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.090983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.091001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.091013 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.168681 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.168781 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:52 crc kubenswrapper[4680]: E0126 16:06:52.168822 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:52 crc kubenswrapper[4680]: E0126 16:06:52.169168 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.197699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.197737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.197748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.197763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.197772 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.300573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.300640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.300662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.300685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.300702 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.402490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.402529 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.402538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.402552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.402560 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.505138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.505372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.505434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.505499 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.505603 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.596715 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:06:08.742017918 +0000 UTC Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.607885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.608178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.608371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.608552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.608686 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.711966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.712090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.712108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.712130 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.712145 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.814909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.814969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.814985 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.815010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.815027 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.917560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.917589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.917622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.917638 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:52 crc kubenswrapper[4680]: I0126 16:06:52.917647 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:52Z","lastTransitionTime":"2026-01-26T16:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.020107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.020140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.020148 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.020161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.020171 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.123151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.123204 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.123220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.123242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.123258 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.169635 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.169717 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:53 crc kubenswrapper[4680]: E0126 16:06:53.169772 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:53 crc kubenswrapper[4680]: E0126 16:06:53.169851 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.225490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.225765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.225928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.226139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.226292 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.328382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.328432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.328449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.328472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.328488 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.430273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.430308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.430340 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.430358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.430368 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.532796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.532864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.532874 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.532887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.532897 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.597842 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:04:11.101849306 +0000 UTC Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.635983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.636025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.636036 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.636051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.636062 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.738657 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.738707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.738726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.738752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.738768 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
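
[Annotation] The setters.go:603 entries repeated above carry the node's Ready condition as inline JSON after "condition=". A minimal triage sketch in Python, assuming journal lines of exactly this shape; the sample line is abridged from the log above, and the regex is illustrative, not anything kubelet provides:

import json
import re

# Abridged setters.go line from the log; the trailing JSON object is the node condition.
line = ('I0126 16:06:53.226292 4680 setters.go:603] "Node became not ready" node="crc" '
        'condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z",'
        '"lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false ..."}')

# The condition object runs from "condition=" to the end of the line.
m = re.search(r'condition=(\{.*\})\s*$', line)
if m:
    cond = json.loads(m.group(1))
    print(cond["type"], cond["status"], cond["reason"])  # -> Ready False KubeletNotReady
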
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.840970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.841034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.841052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.841104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.841123 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.943881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.943908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.943917 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.943931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:53 crc kubenswrapper[4680]: I0126 16:06:53.943940 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:53Z","lastTransitionTime":"2026-01-26T16:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.045989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.046040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.046055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.046104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.046131 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.148517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.148577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.148598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.148628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.148650 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.168688 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.168718 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:06:54 crc kubenswrapper[4680]: E0126 16:06:54.168811 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:06:54 crc kubenswrapper[4680]: E0126 16:06:54.168969 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
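
[Annotation] Every NodeNotReady cycle above bottoms out in the same check: kubelet finds no CNI configuration file under /etc/kubernetes/cni/net.d/. A rough Python poll of that directory, in the spirit of the readiness checks in these logs (the multus container log quoted further down waits the same way for 10-ovn-kubernetes.conf); the timeout, interval, and extensions here are illustrative choices, not kubelet's:

import glob
import os
import time

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message above

def wait_for_cni_config(timeout=45.0, interval=1.0):
    """Poll until a CNI config file appears, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        confs = [p for ext in ("*.conf", "*.conflist", "*.json")
                 for p in glob.glob(os.path.join(CNI_DIR, ext))]
        if confs:
            return confs
        time.sleep(interval)
    raise TimeoutError(f"no CNI configuration file in {CNI_DIR}")
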
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.251471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.251538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.251561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.251590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.251611 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.353893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.354168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.354247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.354324 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.354396 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.458356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.458463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.458486 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.458515 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.458537 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.561969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.562001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.562010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.562025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.562035 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.598748 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:48:45.332546258 +0000 UTC Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.668266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.668332 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.668371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.668393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.668452 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.771451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.771493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.771502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.771520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.771533 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
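
[Annotation] The two certificate_manager.go:356 lines (16:06:53 and 16:06:54) report the same expiration but different rotation deadlines; client-go's certificate manager recomputes the deadline with jitter, picking a point roughly 70-90% of the way through the certificate's validity window. Note that both printed deadlines already lie in the past relative to the log clock (2026-01-26), i.e. rotation is overdue. A sketch of that deadline computation, assuming a one-year certificate since notBefore is not printed in the log:

import random
from datetime import datetime, timedelta

# Expiration from the certificate_manager.go lines above.
not_after = datetime.fromisoformat("2026-02-24 05:53:03")
# Assumed for illustration: a one-year lifetime (notBefore is not in the log).
not_before = not_after - timedelta(days=365)

def rotation_deadline(nb, na):
    """Rotate at a random point roughly 70-90% of the way through the lifetime."""
    total = na - nb
    return nb + total * (0.7 + 0.2 * random.random())

print(rotation_deadline(not_before, not_after))
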
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.874615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.874701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.874713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.874728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.874737 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.976637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.976692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.976705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.976721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:54 crc kubenswrapper[4680]: I0126 16:06:54.976998 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:54Z","lastTransitionTime":"2026-01-26T16:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.080664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.080703 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.080714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.080730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.080742 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
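
[Annotation] From here on, every status_manager.go:875 patch below fails at the pod.network-node-identity.openshift.io webhook with "x509: certificate has expired" (notAfter 2025-08-24T17:21:41Z against a clock of 2026-01-26). A quick Python probe of that endpoint's serving certificate, assuming it is run on the node and that the third-party "cryptography" package is installed; the host and port are taken from the webhook errors below:

import socket
import ssl
from cryptography import x509  # third-party: pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the errors below

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we only want to read the cert, not trust it

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("notAfter:", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log
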
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.169000 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.169000 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p"
Jan 26 16:06:55 crc kubenswrapper[4680]: E0126 16:06:55.169298 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:06:55 crc kubenswrapper[4680]: E0126 16:06:55.169396 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa"
Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.183725 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ee455-dd44-4d7c-82f9-5f99ce11fb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8504331d99e9b18173075231a8336f221042a79bb77c7b0da5300c8f213db990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3cc6c93babf639da7ff7e079ae917626d41f7adacd287af874307986f4932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b3799054ecc5379ccff56f30a7aad86cb234f78ce1f7c3d8aed64b2fb6817b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d3d85898fd3afc232b8598216ede19cd602d917209b3c72130d06b7d7aa5da1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.186146 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.186289 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.186407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.186446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc 
kubenswrapper[4680]: I0126 16:06:55.186471 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.195111 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2c6eb4-396f-4ba6-9bdb-f00fb75783d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62222bdb0996eeb7ef310cd37b4fb75c631e560a6820c6d1d9ec9d041020c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc0f7bd06fb3dd9377d50b89d13d806787f06d28576db2d0d8facf987caa34f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.213873 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8ef5cdda941910ac63bcabb601b0655cffc72ec199983cc6c25b037b593f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.230701 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.243528 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dt95s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04527fbd-5b7b-40c2-b752-616f569e966a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4dd153fc07658edecaa31f0842a4c22ea0fcff6733fcec1217974dffa7d6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm8qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dt95s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.260211 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqgn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac04312-7b74-4193-9b93-b54b91bab69b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:32Z\\\",\\\"message\\\":\\\"2026-01-26T16:05:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3\\\\n2026-01-26T16:05:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21042497-8d39-40cd-acda-fc551fef76a3 to /host/opt/cni/bin/\\\\n2026-01-26T16:05:47Z [verbose] multus-daemon started\\\\n2026-01-26T16:05:47Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:06:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hh5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqgn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.274281 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9dc4863-2cc9-49db-9d40-2b1d04bddea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385b2baee7601ea6cecbfd90be524499cd49886b285593c0755472e1ca523073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c411e9a6d869e02a3b7f6125bac0eb806eac9dbe4aea37a46bf6daf4a24002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gfhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rpcvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 
16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.290744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.290789 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.290839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.290857 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.290873 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.291975 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.308262 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://330e4b79d6e6ef8568ceed5d481565f8b0529a4255d8fc8313fa286b67268f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.323680 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cbae131-7d55-4573-b849-5a223c64ffa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://318af9a6a12cbcb340a65bb9ca6154ccadec05b489fb18c9992e2076dde74dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t25v8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qr4fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.337196 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7c55c8-f998-4a91-ad35-9d4eb390c817\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab751c85438580d4e6e40ea0c286d96bb53acea780c35705e2fb9b9a35fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c10e2766057ab60417de20da04c39a02831998267e07a9e9c3a857d220294ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb7c4a0d97664aeedece007f48d
b185e1ee511f927907cf9428533dbcbd0a525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.367710 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600553f1-b2ab-417a-8b73-e70d4848ee3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a82c714f54a86f11c662a7d74290feb12f7b95bede5d3b93b4eb4602214814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e629b51d998cd5f1539f8f42
484939a0793248310993d90c6724cffe14718189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fcb5569a12cf9c82aef9d84a3a3012631f88d34ee1bfd9862d97a4d2dc4f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b73dae8f813843f753cae36b51e4c110b76029aaf0e887a8f6e4b7cc3b4600b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bae8b24633ec1f41c8edbeb8e3770751f5d8ffcc2ff0acceb773b8157f5fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e0b0e10effcdb96115b74dacef4b36e776b22822fe2178b8560013d09978c7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94ba5b816b8c41ef72da79d9c1d72ab217c28398e4a18bc39cce9e96daf4881d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46b5aacff1332bfa064387a2b51e9ec694360b1d91a9e0ef6b0fb8d6657c062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.387548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8b202a9-2dd7-4e9d-a072-c51433d3596f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:06:42Z\\\",\\\"message\\\":\\\"c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006726 6606 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:06:42.006786 6606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admiss\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.392990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.393033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.393044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.393062 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.393110 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.402551 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mp72c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d42ee6-ef5f-4c64-b5ff-bb13c0dbbbb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://771e42ff3c0b130c890bcffa311f03cda9acbccefc957202e93173051e0d5618\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176358a34e55bbbd95f1227e4bc09771baf300e2b338fce33c2702e64afcd96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba154caa4b65eb430594397cfacbb54dc0bd3b6c2fe262b2137256f80f21df94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43b9f14f19619388824b2c7f3c17ebf39ba7902eee44b99b7de6c88dc4d9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1db3f5034889b2f10d48d3d6dad4dfd515917cf59a9c61b6f7b6eddc0844316\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c97eb915df11efa97135b4bf35f25aad43c5526ae8abe6af92f37f4bceb5f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32d69f29f8f75b0c5ae9d89295292a1e0503526be8c0d845574a4de40335d732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2zzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mp72c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.413005 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-fbl6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40816c76-44c8-4161-84f3-b1693d48aeaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcdct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fbl6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.424985 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affa4a5-f8e6-40ca-bf8f-f022bc800dc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:05:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.435465 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.446133 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e052f96190974f46877a74eedf19171d2d1185ba83bafa5b1a79a4b63ba43ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b600f5183b61c32a1b8bd90761d55d5d2fe49d6b499b86ead218d3c3658fd5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.456134 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8ftvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5bd0b77-69ce-4f27-a3cb-1d55d7942f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90df463d0713e4cbb7aa514410fca2e1a8189c347124708daa1436798cc04fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnlz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:05:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8ftvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:06:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.496134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.496177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.496188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.496205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.496216 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.598599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.598642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.598680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.598701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.598716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.599664 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:26:51.211073143 +0000 UTC Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.700740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.700774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.700797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.700833 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.700848 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.804503 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.804574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.804591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.804615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.804632 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.907723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.907772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.907784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.907802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:55 crc kubenswrapper[4680]: I0126 16:06:55.907814 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:55Z","lastTransitionTime":"2026-01-26T16:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.010835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.010888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.010903 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.010925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.010940 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.113547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.113605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.113622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.113645 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.113662 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.169204 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.169247 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:56 crc kubenswrapper[4680]: E0126 16:06:56.169474 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:56 crc kubenswrapper[4680]: E0126 16:06:56.169937 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.170613 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:06:56 crc kubenswrapper[4680]: E0126 16:06:56.171112 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.216589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.216643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.216664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.216690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.216709 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.452767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.452805 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.452819 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.452838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.452849 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.555678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.555707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.555716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.555728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.555738 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.600519 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:32:34.498103191 +0000 UTC Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.657911 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.657968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.657985 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.658011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.658027 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.760321 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.760679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.760829 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.761008 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.761241 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.864319 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.864371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.864390 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.864408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.864729 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.967267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.967321 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.967333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.967349 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:56 crc kubenswrapper[4680]: I0126 16:06:56.967361 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:56Z","lastTransitionTime":"2026-01-26T16:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.070377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.070449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.070472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.070502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.070523 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.169669 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:57 crc kubenswrapper[4680]: E0126 16:06:57.170164 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.169679 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:57 crc kubenswrapper[4680]: E0126 16:06:57.170532 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.172476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.172511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.172526 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.172543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.172557 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.274912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.275303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.275433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.275573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.275716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.378616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.378650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.378660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.378677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.378688 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.480668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.480980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.481088 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.481186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.481284 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.584899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.584948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.584964 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.584991 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.585007 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.650013 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:19:54.546263905 +0000 UTC Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.688138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.688379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.688477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.688550 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.688619 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.790992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.791040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.791052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.791095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.791109 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.894115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.894158 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.894170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.894188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.894201 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.997029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.997121 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.997133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.997152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:57 crc kubenswrapper[4680]: I0126 16:06:57.997165 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:57Z","lastTransitionTime":"2026-01-26T16:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.099312 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.099349 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.099357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.099371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.099379 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.168663 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.168744 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:06:58 crc kubenswrapper[4680]: E0126 16:06:58.168814 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:06:58 crc kubenswrapper[4680]: E0126 16:06:58.168885 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.201921 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.201950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.201958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.201970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.201978 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.304202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.304260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.304278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.304302 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.304321 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.406541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.406591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.406603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.406618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.406627 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.509141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.509166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.509173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.509188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.509196 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.611844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.611924 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.611948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.611979 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.612000 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.651186 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:12:06.817937379 +0000 UTC Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.714648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.714688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.714705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.714722 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.714733 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.816889 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.816928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.816943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.816976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.816988 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.919875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.919977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.920003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.920051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:58 crc kubenswrapper[4680]: I0126 16:06:58.920527 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:58Z","lastTransitionTime":"2026-01-26T16:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.023127 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.023174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.023184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.023203 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.023214 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.125648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.125719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.125736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.125761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.125778 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.169136 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.169265 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:06:59 crc kubenswrapper[4680]: E0126 16:06:59.169881 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:06:59 crc kubenswrapper[4680]: E0126 16:06:59.170060 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.228362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.228439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.228453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.228476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.228488 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.332172 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.332233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.332252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.332275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.332292 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.435642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.435812 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.435850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.435885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.435909 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.539527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.539580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.539597 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.539620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.539638 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.642423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.642505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.642523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.642549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.642570 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.651434 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:12:49.108795026 +0000 UTC
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.745950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.746001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.746018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.746042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.746059 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.849498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.849607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.849631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.849698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.849721 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.952528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.952593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.952648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.952670 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:06:59 crc kubenswrapper[4680]: I0126 16:06:59.952687 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:06:59Z","lastTransitionTime":"2026-01-26T16:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.055904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.055985 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.056019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.056047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.056097 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.159134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.159212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.159235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.159266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.159289 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.169416 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.169473 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.169729 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.169848 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.263299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.263357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.263380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.263409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.263446 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.363921 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.363964 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.363976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.363992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.364004 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.379256 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.382676 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.382826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.382839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.382856 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.382867 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.397231 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.402460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.402524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.402537 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.402553 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.402564 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.416594 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.419953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.419974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.419982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.419996 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.420006 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.431363 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.434714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.434739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.434747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.434759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.434767 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.447018 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c9179394-fa64-4ce2-b2e0-fe9933369765\\\",\\\"systemUUID\\\":\\\"6bbe44ff-394c-4d30-89b4-d488d80b2762\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:07:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:07:00 crc kubenswrapper[4680]: E0126 16:07:00.447145 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.448496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.448527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.448538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.448554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.448565 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.550984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.551015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.551022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.551034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.551042 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.652226 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:54:19.447424203 +0000 UTC Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.654570 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.654628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.654649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.654679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.654702 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.757179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.757226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.757244 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.757265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.757281 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.860599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.860658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.860677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.860709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.860736 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.962806 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.962849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.962861 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.962877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:00 crc kubenswrapper[4680]: I0126 16:07:00.962888 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:00Z","lastTransitionTime":"2026-01-26T16:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.065527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.065573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.065585 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.065604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.065618 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168092 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168136 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168164 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168176 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.168999 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.169019 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:01 crc kubenswrapper[4680]: E0126 16:07:01.169180 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:01 crc kubenswrapper[4680]: E0126 16:07:01.169416 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.270756 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.270795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.270803 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.270818 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.270827 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.373635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.373704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.373793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.373827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.373850 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.475822 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.475876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.475895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.475909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.475917 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.579736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.580168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.580370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.580523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.580639 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.652484 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:10:19.588463226 +0000 UTC Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.683247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.683300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.683319 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.683341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.683358 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.786142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.786279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.786302 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.786329 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.786347 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.888457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.888486 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.888494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.888506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.888515 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.992521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.992592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.992609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.992634 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:01 crc kubenswrapper[4680]: I0126 16:07:01.992652 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:01Z","lastTransitionTime":"2026-01-26T16:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.095951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.096025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.096039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.096057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.096092 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.169676 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:02 crc kubenswrapper[4680]: E0126 16:07:02.169953 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.170460 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:02 crc kubenswrapper[4680]: E0126 16:07:02.170691 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.199113 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.199158 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.199168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.199184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.199195 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.302062 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.302160 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.302177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.302202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.302220 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.405152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.405214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.405240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.405270 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.405291 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.507887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.507936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.507956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.508150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.508172 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.610327 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.610364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.610374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.610389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.610399 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.652811 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:28:32.066239543 +0000 UTC Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.712707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.712747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.712761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.712776 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.712786 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.816290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.816358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.816380 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.816413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.816437 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.920515 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.920591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.920605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.920628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:02 crc kubenswrapper[4680]: I0126 16:07:02.920645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:02Z","lastTransitionTime":"2026-01-26T16:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.023752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.023821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.023833 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.023858 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.023870 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.126697 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.126735 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.126747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.126764 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.126773 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.169388 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.169428 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:03 crc kubenswrapper[4680]: E0126 16:07:03.169551 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:03 crc kubenswrapper[4680]: E0126 16:07:03.169890 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.228892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.228929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.228938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.228957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.228970 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.332248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.332315 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.332333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.332359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.332377 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.435629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.435672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.435685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.435703 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.435718 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.539125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.539197 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.539216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.539251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.539285 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.642117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.642163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.642175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.642191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.642205 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.653308 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:18:15.11277013 +0000 UTC Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.745051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.745159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.745186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.745221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.745245 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.841806 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:03 crc kubenswrapper[4680]: E0126 16:07:03.842212 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:07:03 crc kubenswrapper[4680]: E0126 16:07:03.842352 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs podName:40816c76-44c8-4161-84f3-b1693d48aeaa nodeName:}" failed. No retries permitted until 2026-01-26 16:08:07.842315057 +0000 UTC m=+163.003587366 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs") pod "network-metrics-daemon-fbl6p" (UID: "40816c76-44c8-4161-84f3-b1693d48aeaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.848271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.848325 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.848342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.848367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.848382 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.952484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.952538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.952557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.952584 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:03 crc kubenswrapper[4680]: I0126 16:07:03.952600 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:03Z","lastTransitionTime":"2026-01-26T16:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.055413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.055454 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.055464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.055482 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.055493 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.158350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.158408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.158421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.158450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.158469 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.168611 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.168668 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:07:04 crc kubenswrapper[4680]: E0126 16:07:04.168820 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:04 crc kubenswrapper[4680]: E0126 16:07:04.168943 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.262469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.262635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.262671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.262707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.262731 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.366392 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.366453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.366468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.366490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.366506 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.469887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.470179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.470258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.470339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.470403 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.573712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.573762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.573771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.573788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.573800 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.653821 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:37:23.15174139 +0000 UTC
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.677379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.677490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.677511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.677548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.677568 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.780669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.780909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.780992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.781085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.781200 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
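The certificate_manager.go:356 records in this section print a different rotation deadline every second (2025-11-28, 2025-12-07, 2025-11-20, ...), all in the past relative to the log's clock: once the deadline has passed the manager tries to rotate immediately, and each attempt recomputes a jittered deadline. A sketch of the jitter as client-go's certificate manager is understood to compute it, a uniform point in roughly the 70-90% window of the certificate's validity; notBefore below is an assumption, since only the expiry appears in the log:

```python
import random
from datetime import datetime, timedelta

# Sketch of a jittered rotation deadline: notBefore + lifetime * U(0.7, 0.9).
# The 70-90% window is an assumption about client-go's behavior.
not_after = datetime(2026, 2, 24, 5, 53, 3)                # expiry from the log
not_before = not_after - timedelta(days=365)               # assumed issuance

def rotation_deadline(nb: datetime, na: datetime) -> datetime:
    total = na - nb
    return nb + total * (0.7 + 0.2 * random.random())

for _ in range(3):
    print(rotation_deadline(not_before, not_after))
```

Re-rolling the jitter on every failed rotation spreads retries out across the fleet, which is consistent with the scattered deadlines logged here.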
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.883406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.883652 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.883729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.883815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.883889 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.987052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.987186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.987211 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.987245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:04 crc kubenswrapper[4680]: I0126 16:07:04.987267 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:04Z","lastTransitionTime":"2026-01-26T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.091266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.091353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.091375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.091406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.091428 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.169856 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:05 crc kubenswrapper[4680]: E0126 16:07:05.171629 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.170476 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:05 crc kubenswrapper[4680]: E0126 16:07:05.172144 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.194001 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.194110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.194157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.194189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.194215 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.254229 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mp72c" podStartSLOduration=80.254209797 podStartE2EDuration="1m20.254209797s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.253610069 +0000 UTC m=+100.414882348" watchObservedRunningTime="2026-01-26 16:07:05.254209797 +0000 UTC m=+100.415482076" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.279138 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.27912356 podStartE2EDuration="1m15.27912356s" podCreationTimestamp="2026-01-26 16:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.278516283 +0000 UTC m=+100.439788542" watchObservedRunningTime="2026-01-26 16:07:05.27912356 +0000 UTC m=+100.440395829" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.296385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.296422 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.296432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.296446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.296456 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.304687 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=81.304667971 podStartE2EDuration="1m21.304667971s" podCreationTimestamp="2026-01-26 16:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.304399014 +0000 UTC m=+100.465671283" watchObservedRunningTime="2026-01-26 16:07:05.304667971 +0000 UTC m=+100.465940240"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.329576 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8ftvt" podStartSLOduration=80.329543704 podStartE2EDuration="1m20.329543704s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.329152003 +0000 UTC m=+100.490424282" watchObservedRunningTime="2026-01-26 16:07:05.329543704 +0000 UTC m=+100.490815973"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.345345 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.345325276 podStartE2EDuration="1m22.345325276s" podCreationTimestamp="2026-01-26 16:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.343475503 +0000 UTC m=+100.504747772" watchObservedRunningTime="2026-01-26 16:07:05.345325276 +0000 UTC m=+100.506597545"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.398578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.398620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.398632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.398648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.398669 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
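The pod_startup_latency_tracker.go:104 records above and below report podStartSLOduration per pod as each static and daemon pod is observed running. A quick illustrative way to rank pods by that value; the regex is an assumption, and the sample values are copied from this log:

```python
import re

# Sketch: rank pods by podStartSLOduration from records like those above.
REC_RE = re.compile(r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<secs>[0-9.]+)')

lines = [
    'pod="openshift-etcd/etcd-crc" podStartSLOduration=81.304667971',
    'pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.345325276',
    'pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.468896674',
]

stats = sorted((float(m["secs"]), m["pod"]) for m in map(REC_RE.search, lines))
for secs, pod in reversed(stats):      # slowest first
    print(f"{secs:8.1f}s  {pod}")
```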
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.409498 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dt95s" podStartSLOduration=81.409485983 podStartE2EDuration="1m21.409485983s" podCreationTimestamp="2026-01-26 16:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.408897176 +0000 UTC m=+100.570169455" watchObservedRunningTime="2026-01-26 16:07:05.409485983 +0000 UTC m=+100.570758252"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.427562 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lqgn2" podStartSLOduration=80.42755002 podStartE2EDuration="1m20.42755002s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.427103077 +0000 UTC m=+100.588375346" watchObservedRunningTime="2026-01-26 16:07:05.42755002 +0000 UTC m=+100.588822289"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.438384 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rpcvf" podStartSLOduration=79.43836967 podStartE2EDuration="1m19.43836967s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.437144605 +0000 UTC m=+100.598416884" watchObservedRunningTime="2026-01-26 16:07:05.43836967 +0000 UTC m=+100.599641939"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.458627 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.45861004 podStartE2EDuration="51.45861004s" podCreationTimestamp="2026-01-26 16:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.450050075 +0000 UTC m=+100.611322344" watchObservedRunningTime="2026-01-26 16:07:05.45861004 +0000 UTC m=+100.619882309"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.468911 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.468896674 podStartE2EDuration="21.468896674s" podCreationTimestamp="2026-01-26 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.459595218 +0000 UTC m=+100.620867477" watchObservedRunningTime="2026-01-26 16:07:05.468896674 +0000 UTC m=+100.630168933"
Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.482205 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podStartSLOduration=80.482193165 podStartE2EDuration="1m20.482193165s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:05.469501162 +0000 UTC m=+100.630773431" watchObservedRunningTime="2026-01-26 16:07:05.482193165 +0000 UTC m=+100.643465434"
Jan 26 16:07:05 crc
kubenswrapper[4680]: I0126 16:07:05.500889 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.500916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.500924 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.500936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.500944 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.603226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.603295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.603312 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.603339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.603358 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.655438 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:51:36.150353023 +0000 UTC Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.705999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.706110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.706134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.706166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.706188 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.807909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.807953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.807964 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.807983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.807996 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.910033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.910083 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.910095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.910109 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:05 crc kubenswrapper[4680]: I0126 16:07:05.910118 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:05Z","lastTransitionTime":"2026-01-26T16:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.012139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.012175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.012182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.012199 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.012208 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.114420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.114477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.114493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.114513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.114529 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.169282 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.169294 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:07:06 crc kubenswrapper[4680]: E0126 16:07:06.169475 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:07:06 crc kubenswrapper[4680]: E0126 16:07:06.169564 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.216938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.216980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.216992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.217010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.217022 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
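Every loop in this section traces back to the same condition: no CNI config file exists yet in /etc/kubernetes/cni/net.d/, the directory named in each error; on this node the ovn-kubernetes pods are expected to write it once they come up, after which the sandbox creations above can proceed. A small debugging aid (an illustration, not kubelet behavior) that polls that directory:

```python
import os
import time

# Sketch: wait for a network plugin to drop its conf file into the CNI
# config directory named in the errors above. Debugging aid only.
CNI_DIR = "/etc/kubernetes/cni/net.d"

def wait_for_cni(timeout: float = 300.0, interval: float = 5.0) -> list:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            confs = [f for f in os.listdir(CNI_DIR)
                     if f.endswith((".conf", ".conflist", ".json"))]
        except FileNotFoundError:
            confs = []          # directory not created yet
        if confs:
            return confs
        time.sleep(interval)
    raise TimeoutError(f"no CNI config appeared in {CNI_DIR}")

if __name__ == "__main__":
    print(wait_for_cni())
```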
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.319520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.319576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.319590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.319608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.319622 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.422822 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.422869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.422887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.422910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.422925 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.524626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.524663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.524672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.524685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.524696 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.626755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.626800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.626808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.626821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.626830 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.655941 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:54:06.194514562 +0000 UTC Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.729185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.729215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.729224 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.729238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.729247 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.832031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.832101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.832116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.832134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.832146 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.934033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.934093 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.934112 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.934132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:06 crc kubenswrapper[4680]: I0126 16:07:06.934144 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:06Z","lastTransitionTime":"2026-01-26T16:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.036285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.036318 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.036326 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.036337 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.036346 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.138477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.138508 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.138516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.138528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.138536 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.169001 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.169011 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:07 crc kubenswrapper[4680]: E0126 16:07:07.169149 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:07 crc kubenswrapper[4680]: E0126 16:07:07.169374 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.241186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.241243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.241262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.241280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.241292 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.343391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.343429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.343439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.343455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.343465 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.445299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.445338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.445348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.445364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.445374 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.547528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.547593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.547616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.547660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.547687 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.649701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.649745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.649754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.649768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.649778 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.656837 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:18:09.523056133 +0000 UTC Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.752640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.752695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.752714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.752736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.752752 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.855710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.855757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.855772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.855796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.855812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.958859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.958910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.958920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.958938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:07 crc kubenswrapper[4680]: I0126 16:07:07.958952 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:07Z","lastTransitionTime":"2026-01-26T16:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.061487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.061520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.061532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.061549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.061561 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.164428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.164480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.164495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.164512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.164527 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.168520 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.168621 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:08 crc kubenswrapper[4680]: E0126 16:07:08.168760 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:08 crc kubenswrapper[4680]: E0126 16:07:08.169060 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.268441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.268500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.268518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.268546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.268573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.370781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.370825 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.370849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.370869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.370885 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.473969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.474033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.474043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.474061 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.474106 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.576469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.576528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.576580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.576603 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.576621 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.657147 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:53:18.691698741 +0000 UTC Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.678725 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.678781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.678793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.678808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.678820 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.781215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.781273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.781285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.781304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.781318 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.883171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.883218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.883229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.883243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.883252 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.985096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.985135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.985143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.985173 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:08 crc kubenswrapper[4680]: I0126 16:07:08.985183 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:08Z","lastTransitionTime":"2026-01-26T16:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.086849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.086891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.086901 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.086916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.086931 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.169545 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.169593 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:09 crc kubenswrapper[4680]: E0126 16:07:09.169705 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:09 crc kubenswrapper[4680]: E0126 16:07:09.169817 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.170622 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:07:09 crc kubenswrapper[4680]: E0126 16:07:09.170819 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j2vl_openshift-ovn-kubernetes(f8b202a9-2dd7-4e9d-a072-c51433d3596f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.189534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.189576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.189586 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.189601 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.189611 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.292234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.292286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.292300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.292319 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.292333 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.393859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.393893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.393903 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.393916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.393943 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.496338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.496416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.496440 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.496469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.496490 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.597935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.597970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.597978 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.597993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.598002 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.657687 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:05:50.020781426 +0000 UTC Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.699661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.699704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.699713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.699726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.699735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.802552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.802599 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.802612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.802629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.802638 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.905713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.905766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.905779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.905797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:09 crc kubenswrapper[4680]: I0126 16:07:09.905814 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:09Z","lastTransitionTime":"2026-01-26T16:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.007983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.008033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.008044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.008060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.008088 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.110847 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.110884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.110893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.110907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.110916 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.169141 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.169154 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:10 crc kubenswrapper[4680]: E0126 16:07:10.169291 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:10 crc kubenswrapper[4680]: E0126 16:07:10.169380 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.213572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.213632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.213651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.213675 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.213692 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.316129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.316174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.316186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.316202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.316212 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.418913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.419030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.419055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.419125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.419150 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.521459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.521496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.521524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.521557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.521569 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.623591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.623628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.623639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.623654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.623664 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.658765 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:34:26.501808144 +0000 UTC Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.678122 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.678161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.678172 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.678188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.678197 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:07:10Z","lastTransitionTime":"2026-01-26T16:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.718791 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd"] Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.719183 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.720793 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.722228 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.722374 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.724995 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.808423 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23da4f35-1033-4b5a-8075-87efee55b9a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.808461 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23da4f35-1033-4b5a-8075-87efee55b9a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 
16:07:10.808522 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.808545 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23da4f35-1033-4b5a-8075-87efee55b9a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.808625 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909677 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23da4f35-1033-4b5a-8075-87efee55b9a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23da4f35-1033-4b5a-8075-87efee55b9a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909759 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23da4f35-1033-4b5a-8075-87efee55b9a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909797 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909860 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.909942 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/23da4f35-1033-4b5a-8075-87efee55b9a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.910637 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23da4f35-1033-4b5a-8075-87efee55b9a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.921834 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23da4f35-1033-4b5a-8075-87efee55b9a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:10 crc kubenswrapper[4680]: I0126 16:07:10.927822 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23da4f35-1033-4b5a-8075-87efee55b9a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8q4vd\" (UID: \"23da4f35-1033-4b5a-8075-87efee55b9a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.037746 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.168851 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:11 crc kubenswrapper[4680]: E0126 16:07:11.169293 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.168940 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:11 crc kubenswrapper[4680]: E0126 16:07:11.169362 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.659894 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:19:47.141845164 +0000 UTC Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.659936 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.666373 4680 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.741851 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" event={"ID":"23da4f35-1033-4b5a-8075-87efee55b9a4","Type":"ContainerStarted","Data":"25e3d9e37a6bf89ddf2d592c0ae761ec6a6bfbc9e6f5a67c0b70dedf5b05e4a6"} Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.741904 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" event={"ID":"23da4f35-1033-4b5a-8075-87efee55b9a4","Type":"ContainerStarted","Data":"1e11bcb6bad23d692419a38b6eb73dd1fa7bf699d23bb2a708152f0e7276071b"} Jan 26 16:07:11 crc kubenswrapper[4680]: I0126 16:07:11.755133 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8q4vd" podStartSLOduration=86.755112668 podStartE2EDuration="1m26.755112668s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:11.754361547 +0000 UTC m=+106.915633826" watchObservedRunningTime="2026-01-26 16:07:11.755112668 +0000 UTC m=+106.916384937" Jan 26 16:07:12 crc kubenswrapper[4680]: I0126 16:07:12.168921 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:12 crc kubenswrapper[4680]: I0126 16:07:12.169028 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:12 crc kubenswrapper[4680]: E0126 16:07:12.169485 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:12 crc kubenswrapper[4680]: E0126 16:07:12.169697 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:13 crc kubenswrapper[4680]: I0126 16:07:13.169255 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:13 crc kubenswrapper[4680]: E0126 16:07:13.169790 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:13 crc kubenswrapper[4680]: I0126 16:07:13.169396 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:13 crc kubenswrapper[4680]: E0126 16:07:13.170004 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:14 crc kubenswrapper[4680]: I0126 16:07:14.169557 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:14 crc kubenswrapper[4680]: E0126 16:07:14.169675 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:14 crc kubenswrapper[4680]: I0126 16:07:14.170225 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:14 crc kubenswrapper[4680]: E0126 16:07:14.170312 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:15 crc kubenswrapper[4680]: I0126 16:07:15.169490 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:15 crc kubenswrapper[4680]: I0126 16:07:15.169954 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:15 crc kubenswrapper[4680]: E0126 16:07:15.178994 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:15 crc kubenswrapper[4680]: E0126 16:07:15.179499 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:16 crc kubenswrapper[4680]: I0126 16:07:16.168856 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:16 crc kubenswrapper[4680]: I0126 16:07:16.168945 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:16 crc kubenswrapper[4680]: E0126 16:07:16.168990 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:16 crc kubenswrapper[4680]: E0126 16:07:16.169204 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:17 crc kubenswrapper[4680]: I0126 16:07:17.169132 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:17 crc kubenswrapper[4680]: E0126 16:07:17.169270 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:17 crc kubenswrapper[4680]: I0126 16:07:17.169132 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:17 crc kubenswrapper[4680]: E0126 16:07:17.169469 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:18 crc kubenswrapper[4680]: I0126 16:07:18.168753 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:18 crc kubenswrapper[4680]: I0126 16:07:18.168837 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:18 crc kubenswrapper[4680]: E0126 16:07:18.168895 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:18 crc kubenswrapper[4680]: E0126 16:07:18.169040 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.169399 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.169549 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:19 crc kubenswrapper[4680]: E0126 16:07:19.169637 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:19 crc kubenswrapper[4680]: E0126 16:07:19.169706 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.764315 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/1.log" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.764713 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/0.log" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.764758 4680 generic.go:334] "Generic (PLEG): container finished" podID="9ac04312-7b74-4193-9b93-b54b91bab69b" containerID="baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81" exitCode=1 Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.764785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerDied","Data":"baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81"} Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.764819 4680 scope.go:117] "RemoveContainer" containerID="5b9f0027c18b4bf9cf470958882a0a4dc1401e5c0321686111998de4b5d1bcf6" Jan 26 16:07:19 crc kubenswrapper[4680]: I0126 16:07:19.765341 4680 scope.go:117] "RemoveContainer" containerID="baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81" Jan 26 16:07:19 crc kubenswrapper[4680]: E0126 16:07:19.765587 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lqgn2_openshift-multus(9ac04312-7b74-4193-9b93-b54b91bab69b)\"" pod="openshift-multus/multus-lqgn2" podUID="9ac04312-7b74-4193-9b93-b54b91bab69b" Jan 26 16:07:20 crc kubenswrapper[4680]: I0126 16:07:20.169527 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:20 crc kubenswrapper[4680]: I0126 16:07:20.169573 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:20 crc kubenswrapper[4680]: E0126 16:07:20.169694 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:20 crc kubenswrapper[4680]: E0126 16:07:20.169744 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:20 crc kubenswrapper[4680]: I0126 16:07:20.770052 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/1.log" Jan 26 16:07:21 crc kubenswrapper[4680]: I0126 16:07:21.169171 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:21 crc kubenswrapper[4680]: I0126 16:07:21.169234 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:21 crc kubenswrapper[4680]: E0126 16:07:21.169340 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:21 crc kubenswrapper[4680]: E0126 16:07:21.169453 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:22 crc kubenswrapper[4680]: I0126 16:07:22.168912 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:22 crc kubenswrapper[4680]: I0126 16:07:22.168911 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:22 crc kubenswrapper[4680]: E0126 16:07:22.169183 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:22 crc kubenswrapper[4680]: E0126 16:07:22.169254 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.169287 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:23 crc kubenswrapper[4680]: E0126 16:07:23.169422 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.170201 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:23 crc kubenswrapper[4680]: E0126 16:07:23.170397 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.170805 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.779600 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.782445 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerStarted","Data":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.782853 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.807348 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podStartSLOduration=97.807333568 podStartE2EDuration="1m37.807333568s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:23.806791533 +0000 UTC m=+118.968063822" watchObservedRunningTime="2026-01-26 16:07:23.807333568 +0000 UTC m=+118.968605827" Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.975936 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fbl6p"] Jan 26 16:07:23 crc kubenswrapper[4680]: I0126 16:07:23.976100 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:23 crc kubenswrapper[4680]: E0126 16:07:23.976249 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:24 crc kubenswrapper[4680]: I0126 16:07:24.169598 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:24 crc kubenswrapper[4680]: E0126 16:07:24.169739 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:24 crc kubenswrapper[4680]: I0126 16:07:24.170257 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:24 crc kubenswrapper[4680]: E0126 16:07:24.170403 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:25 crc kubenswrapper[4680]: E0126 16:07:25.158047 4680 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 16:07:25 crc kubenswrapper[4680]: I0126 16:07:25.169730 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:25 crc kubenswrapper[4680]: E0126 16:07:25.170964 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:25 crc kubenswrapper[4680]: I0126 16:07:25.171189 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:25 crc kubenswrapper[4680]: E0126 16:07:25.171468 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:25 crc kubenswrapper[4680]: E0126 16:07:25.352063 4680 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 16:07:26 crc kubenswrapper[4680]: I0126 16:07:26.168824 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:26 crc kubenswrapper[4680]: I0126 16:07:26.168825 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:26 crc kubenswrapper[4680]: E0126 16:07:26.169307 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:26 crc kubenswrapper[4680]: E0126 16:07:26.169418 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:27 crc kubenswrapper[4680]: I0126 16:07:27.168652 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:27 crc kubenswrapper[4680]: I0126 16:07:27.168652 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:27 crc kubenswrapper[4680]: E0126 16:07:27.169141 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:27 crc kubenswrapper[4680]: E0126 16:07:27.169165 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:28 crc kubenswrapper[4680]: I0126 16:07:28.169312 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:28 crc kubenswrapper[4680]: E0126 16:07:28.169547 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:28 crc kubenswrapper[4680]: I0126 16:07:28.172292 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:28 crc kubenswrapper[4680]: E0126 16:07:28.172440 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:29 crc kubenswrapper[4680]: I0126 16:07:29.169475 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:29 crc kubenswrapper[4680]: I0126 16:07:29.169475 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:29 crc kubenswrapper[4680]: E0126 16:07:29.169627 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:29 crc kubenswrapper[4680]: E0126 16:07:29.169659 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:30 crc kubenswrapper[4680]: I0126 16:07:30.168730 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:30 crc kubenswrapper[4680]: I0126 16:07:30.168773 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:30 crc kubenswrapper[4680]: E0126 16:07:30.168871 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:30 crc kubenswrapper[4680]: E0126 16:07:30.168990 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:30 crc kubenswrapper[4680]: E0126 16:07:30.353179 4680 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 16:07:31 crc kubenswrapper[4680]: I0126 16:07:31.169493 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:31 crc kubenswrapper[4680]: E0126 16:07:31.169693 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:31 crc kubenswrapper[4680]: I0126 16:07:31.169795 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:31 crc kubenswrapper[4680]: E0126 16:07:31.169953 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:32 crc kubenswrapper[4680]: I0126 16:07:32.168720 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:32 crc kubenswrapper[4680]: I0126 16:07:32.168791 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:32 crc kubenswrapper[4680]: E0126 16:07:32.168912 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:32 crc kubenswrapper[4680]: E0126 16:07:32.169063 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:33 crc kubenswrapper[4680]: I0126 16:07:33.169290 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:33 crc kubenswrapper[4680]: E0126 16:07:33.169483 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:33 crc kubenswrapper[4680]: I0126 16:07:33.169748 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:33 crc kubenswrapper[4680]: E0126 16:07:33.169861 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:33 crc kubenswrapper[4680]: I0126 16:07:33.170655 4680 scope.go:117] "RemoveContainer" containerID="baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81" Jan 26 16:07:34 crc kubenswrapper[4680]: I0126 16:07:34.092441 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/1.log" Jan 26 16:07:34 crc kubenswrapper[4680]: I0126 16:07:34.092502 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerStarted","Data":"5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504"} Jan 26 16:07:34 crc kubenswrapper[4680]: I0126 16:07:34.169502 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:34 crc kubenswrapper[4680]: I0126 16:07:34.169502 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:34 crc kubenswrapper[4680]: E0126 16:07:34.169672 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:07:34 crc kubenswrapper[4680]: E0126 16:07:34.169739 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:07:35 crc kubenswrapper[4680]: I0126 16:07:35.170016 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:35 crc kubenswrapper[4680]: E0126 16:07:35.172457 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:07:35 crc kubenswrapper[4680]: I0126 16:07:35.172582 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:35 crc kubenswrapper[4680]: E0126 16:07:35.172724 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fbl6p" podUID="40816c76-44c8-4161-84f3-b1693d48aeaa" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.168832 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.168894 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.171490 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.171746 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.172361 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 16:07:36 crc kubenswrapper[4680]: I0126 16:07:36.172367 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 16:07:37 crc kubenswrapper[4680]: I0126 16:07:37.168689 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:37 crc kubenswrapper[4680]: I0126 16:07:37.168763 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:07:37 crc kubenswrapper[4680]: I0126 16:07:37.171290 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 16:07:37 crc kubenswrapper[4680]: I0126 16:07:37.171623 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.870049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.927545 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.928241 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.930059 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ndf74"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.942894 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5w9r"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.943489 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.944054 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.946506 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.946715 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.946726 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqgwn"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.955155 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.955893 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.956679 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.956778 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fbs7f"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.957301 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.958152 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.959157 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.960422 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.960796 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.960947 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961101 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961200 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961340 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961425 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961481 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961637 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961742 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961832 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.960953 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.962020 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961643 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.963876 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.964592 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.964965 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961786 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961249 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.965476 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961964 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.961006 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.965886 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.966110 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x6xh2"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.966708 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.967453 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.967651 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.967131 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.967058 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.969475 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.969870 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bjnls"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.970217 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.970403 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.970742 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.980185 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.980454 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.980602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.980717 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.991371 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.991830 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.993773 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.998158 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fgknk"] Jan 26 16:07:40 crc kubenswrapper[4680]: I0126 16:07:40.998684 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.001642 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.002879 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.004651 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.004976 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.009537 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.010866 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.011412 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.013423 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.013667 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.018299 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.018503 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.019025 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.019493 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.019807 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.020114 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.020311 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.019611 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.020615 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.020816 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.021011 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.021325 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.021564 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.021879 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.022288 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.023703 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-config\") pod \"etcd-operator-b45778765-w5w9r\" (UID: 
\"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.023924 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/17623135-9fc0-4d3f-a017-bcb3196bede3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024044 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-trusted-ca\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024205 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsr2s\" (UniqueName: \"kubernetes.io/projected/fac02262-7a92-4d28-9b92-87da9e2ba68e-kube-api-access-lsr2s\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024314 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcppf\" (UniqueName: \"kubernetes.io/projected/debda3fd-3a5a-4f25-b732-90eb3bade1d4-kube-api-access-gcppf\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024411 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-config\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024510 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024617 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vmv\" (UniqueName: \"kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024718 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit-dir\") pod \"apiserver-76f77b778f-dqgwn\" (UID: 
\"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.024940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-dir\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.025048 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmtv\" (UniqueName: \"kubernetes.io/projected/0e171291-ab8f-40f8-a0e3-2ddf10db9732-kube-api-access-hfmtv\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.025194 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnjf4\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-kube-api-access-rnjf4\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.025299 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.031299 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9v6\" (UniqueName: \"kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.031350 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.031385 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.031415 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.026333 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.032236 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.032374 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.033898 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034168 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrq2q\" (UniqueName: \"kubernetes.io/projected/f93ff197-4612-44d8-b67e-c98ae2906899-kube-api-access-jrq2q\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034257 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-client\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034292 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034312 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-images\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034332 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc 
kubenswrapper[4680]: I0126 16:07:41.034415 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-serving-cert\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034431 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fac02262-7a92-4d28-9b92-87da9e2ba68e-machine-approver-tls\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-serving-cert\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034470 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-service-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034514 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034537 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034552 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-image-import-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc 
kubenswrapper[4680]: I0126 16:07:41.034612 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-auth-proxy-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034632 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034647 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-encryption-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034684 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034704 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034736 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034781 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034797 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/937d1b38-2a29-4846-bb8c-7995c583ac89-serving-cert\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 
16:07:41.034838 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-policies\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034872 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034892 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a10c9f7-b077-4828-a066-d5c07b773069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034912 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.034952 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035057 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035094 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-service-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035116 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035135 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-client\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035169 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035206 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ff197-4612-44d8-b67e-c98ae2906899-serving-cert\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035231 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035252 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclmp\" (UniqueName: \"kubernetes.io/projected/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-kube-api-access-cclmp\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035280 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035299 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035342 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-encryption-config\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035377 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rqm\" (UniqueName: 
\"kubernetes.io/projected/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-kube-api-access-c5rqm\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035398 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-469c9\" (UniqueName: \"kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035415 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035804 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.035853 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036199 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnx8p\" (UniqueName: \"kubernetes.io/projected/937d1b38-2a29-4846-bb8c-7995c583ac89-kube-api-access-gnx8p\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036256 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036283 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036302 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036320 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036346 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcksc\" (UniqueName: \"kubernetes.io/projected/17623135-9fc0-4d3f-a017-bcb3196bede3-kube-api-access-gcksc\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036362 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036394 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-client\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036431 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036450 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036468 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-serving-cert\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036490 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036522 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-node-pullsecrets\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036545 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036563 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a10c9f7-b077-4828-a066-d5c07b773069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036594 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036622 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036773 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-config\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036806 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2vk\" (UniqueName: \"kubernetes.io/projected/7a10c9f7-b077-4828-a066-d5c07b773069-kube-api-access-xg2vk\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036851 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-config\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036874 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pq4\" (UniqueName: \"kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.036921 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.044259 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.044499 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.044770 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.044870 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.045001 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.046215 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.046424 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.046661 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.046788 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.046929 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.047192 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.047933 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 
26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.048251 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.053062 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.053374 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.053457 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.053651 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.053828 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.054362 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.054549 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.054803 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.054900 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.059775 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.059980 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.060876 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.061977 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.091326 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.091378 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.091487 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.091542 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.091626 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: 
I0126 16:07:41.091816 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.092200 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.094136 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.094258 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.096913 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.097090 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.097349 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.097519 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.098061 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.098262 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.098414 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.098610 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.099008 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.102746 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.106524 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.115988 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.116108 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.117157 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.117733 4680 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.117663 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.118686 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ndf74"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.118888 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.119565 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.128109 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.128318 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.130220 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.132716 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.133421 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.134772 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w8tz5"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.135243 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.135497 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.135571 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.136207 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.137317 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139103 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139132 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139156 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139171 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-service-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139226 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-client\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139240 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139256 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ff197-4612-44d8-b67e-c98ae2906899-serving-cert\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 
crc kubenswrapper[4680]: I0126 16:07:41.139271 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclmp\" (UniqueName: \"kubernetes.io/projected/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-kube-api-access-cclmp\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139305 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139320 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rqm\" (UniqueName: \"kubernetes.io/projected/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-kube-api-access-c5rqm\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139337 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139362 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-encryption-config\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-469c9\" (UniqueName: \"kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139392 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139407 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139420 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139449 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnx8p\" (UniqueName: \"kubernetes.io/projected/937d1b38-2a29-4846-bb8c-7995c583ac89-kube-api-access-gnx8p\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139479 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139496 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139514 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139532 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f82b70-3d6b-4c74-9655-08e4552ee8b4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139552 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcksc\" (UniqueName: \"kubernetes.io/projected/17623135-9fc0-4d3f-a017-bcb3196bede3-kube-api-access-gcksc\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: 
\"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139567 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139583 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-client\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139629 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139649 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-serving-cert\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139690 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139711 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-node-pullsecrets\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139728 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a10c9f7-b077-4828-a066-d5c07b773069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139749 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99923803-37e9-445f-bba0-d140e9123e83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139767 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139831 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-config\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139855 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2vk\" (UniqueName: \"kubernetes.io/projected/7a10c9f7-b077-4828-a066-d5c07b773069-kube-api-access-xg2vk\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139875 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-config\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139892 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139909 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4pq4\" (UniqueName: \"kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139930 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139957 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f82b70-3d6b-4c74-9655-08e4552ee8b4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139977 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-config\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.139995 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/17623135-9fc0-4d3f-a017-bcb3196bede3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140018 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-trusted-ca\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140042 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsr2s\" (UniqueName: \"kubernetes.io/projected/fac02262-7a92-4d28-9b92-87da9e2ba68e-kube-api-access-lsr2s\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140080 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-config\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140105 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcppf\" (UniqueName: \"kubernetes.io/projected/debda3fd-3a5a-4f25-b732-90eb3bade1d4-kube-api-access-gcppf\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140130 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsntx\" (UniqueName: \"kubernetes.io/projected/9e3eb885-ee39-406a-a004-27cab65b02f8-kube-api-access-fsntx\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: 
\"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140151 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140155 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140200 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vmv\" (UniqueName: \"kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140221 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit-dir\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140248 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-dir\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnjf4\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-kube-api-access-rnjf4\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmtv\" (UniqueName: \"kubernetes.io/projected/0e171291-ab8f-40f8-a0e3-2ddf10db9732-kube-api-access-hfmtv\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140300 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140316 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9v6\" (UniqueName: \"kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140332 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140351 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e3eb885-ee39-406a-a004-27cab65b02f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: \"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140369 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140388 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140404 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140424 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpmfg\" (UniqueName: \"kubernetes.io/projected/99923803-37e9-445f-bba0-d140e9123e83-kube-api-access-xpmfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrq2q\" (UniqueName: \"kubernetes.io/projected/f93ff197-4612-44d8-b67e-c98ae2906899-kube-api-access-jrq2q\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140457 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-client\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140474 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140489 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-images\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140504 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140520 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-service-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140535 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-serving-cert\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140550 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fac02262-7a92-4d28-9b92-87da9e2ba68e-machine-approver-tls\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140565 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-serving-cert\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140583 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdjq\" (UniqueName: \"kubernetes.io/projected/643d9c97-4160-40e2-9f56-e200526e2a8b-kube-api-access-9qdjq\") pod \"downloads-7954f5f757-fgknk\" (UID: \"643d9c97-4160-40e2-9f56-e200526e2a8b\") " 
pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140600 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99923803-37e9-445f-bba0-d140e9123e83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140636 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140651 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-image-import-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f82b70-3d6b-4c74-9655-08e4552ee8b4-config\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140684 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140707 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-encryption-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140728 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-auth-proxy-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" 
Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140749 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140791 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140815 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140831 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/937d1b38-2a29-4846-bb8c-7995c583ac89-serving-cert\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140863 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a10c9f7-b077-4828-a066-d5c07b773069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140879 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-policies\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.140893 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.141059 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.158195 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.159586 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.162654 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.163063 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.168252 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.174474 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.176498 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.177568 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.178123 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.178456 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.178879 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.180451 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.185151 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-node-pullsecrets\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.185831 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.187544 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc 
kubenswrapper[4680]: I0126 16:07:41.178772 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.191522 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.190785 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-etcd-client\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.180415 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.181904 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.196919 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.197522 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.197793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.199168 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.199462 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-encryption-config\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc 
kubenswrapper[4680]: I0126 16:07:41.199571 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-images\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.199582 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.200432 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.201320 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.202034 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-encryption-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.202778 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203142 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203451 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203491 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a10c9f7-b077-4828-a066-d5c07b773069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: 
\"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203619 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203660 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/debda3fd-3a5a-4f25-b732-90eb3bade1d4-audit-dir\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.203804 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.204294 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.204445 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.204478 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-dir\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.204679 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.205308 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a10c9f7-b077-4828-a066-d5c07b773069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.205413 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.206196 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-audit-policies\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.206700 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-config\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.207424 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.208123 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-trusted-ca\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.208643 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.208720 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-etcd-client\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209018 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209059 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17623135-9fc0-4d3f-a017-bcb3196bede3-config\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209183 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 
16:07:41.209585 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-service-ca\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209656 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937d1b38-2a29-4846-bb8c-7995c583ac89-config\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209810 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.210419 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.210443 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.210807 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.212121 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-config\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.213129 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.213555 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/17623135-9fc0-4d3f-a017-bcb3196bede3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.213573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ff197-4612-44d8-b67e-c98ae2906899-service-ca-bundle\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.213992 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac02262-7a92-4d28-9b92-87da9e2ba68e-auth-proxy-config\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.214366 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e171291-ab8f-40f8-a0e3-2ddf10db9732-config\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.214573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca\") pod \"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.214619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.214835 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fac02262-7a92-4d28-9b92-87da9e2ba68e-machine-approver-tls\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.215265 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/937d1b38-2a29-4846-bb8c-7995c583ac89-serving-cert\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.215536 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-etcd-client\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.215884 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.216143 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.216323 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.216462 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/debda3fd-3a5a-4f25-b732-90eb3bade1d4-image-import-ca\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.216642 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debda3fd-3a5a-4f25-b732-90eb3bade1d4-serving-cert\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.216736 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.209744 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.217609 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e171291-ab8f-40f8-a0e3-2ddf10db9732-serving-cert\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.217666 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.217884 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.218599 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.218709 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ff197-4612-44d8-b67e-c98ae2906899-serving-cert\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc 
kubenswrapper[4680]: I0126 16:07:41.219739 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.220243 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.220586 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.221057 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.221229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.221530 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqgwn"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.221548 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kz692"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.221790 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222009 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9kzqd"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222102 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222583 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222606 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bjnls"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222619 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hhs6d"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.222758 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.223278 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5w9r"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.223313 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fbs7f"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.223324 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lprxn"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.223509 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.225196 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.225253 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.225821 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-46pbk"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.226255 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.226532 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.226995 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.227694 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.228933 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.229381 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.231264 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.231612 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.233421 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-serving-cert\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.234056 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.235260 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.237470 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x6xh2"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.238680 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.239834 4680 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.240916 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241361 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdjq\" (UniqueName: \"kubernetes.io/projected/643d9c97-4160-40e2-9f56-e200526e2a8b-kube-api-access-9qdjq\") pod \"downloads-7954f5f757-fgknk\" (UID: \"643d9c97-4160-40e2-9f56-e200526e2a8b\") " pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241391 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99923803-37e9-445f-bba0-d140e9123e83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241414 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f82b70-3d6b-4c74-9655-08e4552ee8b4-config\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241502 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f82b70-3d6b-4c74-9655-08e4552ee8b4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241542 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99923803-37e9-445f-bba0-d140e9123e83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f82b70-3d6b-4c74-9655-08e4552ee8b4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241627 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsntx\" (UniqueName: \"kubernetes.io/projected/9e3eb885-ee39-406a-a004-27cab65b02f8-kube-api-access-fsntx\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: \"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241685 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e3eb885-ee39-406a-a004-27cab65b02f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: \"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.241710 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpmfg\" (UniqueName: \"kubernetes.io/projected/99923803-37e9-445f-bba0-d140e9123e83-kube-api-access-xpmfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.242223 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.242404 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99923803-37e9-445f-bba0-d140e9123e83-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.243394 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kz692"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.244304 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99923803-37e9-445f-bba0-d140e9123e83-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.244602 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w8tz5"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.244926 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e3eb885-ee39-406a-a004-27cab65b02f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: \"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.246359 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.247782 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.250582 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.250749 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.252061 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6slhj"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.252530 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.253140 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9dh6"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.254852 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.254952 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.255191 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fgknk"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.257261 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.258238 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hhs6d"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.260430 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.261712 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.264120 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.264889 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.274615 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.275568 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.276893 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.278660 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.281785 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.282908 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lprxn"] Jan 26 16:07:41 crc kubenswrapper[4680]: 
I0126 16:07:41.283985 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-46pbk"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.284768 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f82b70-3d6b-4c74-9655-08e4552ee8b4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.285038 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.286036 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.287031 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9dh6"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.288186 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.289134 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6s8qz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.289840 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.290076 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6s8qz"] Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.290461 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.292879 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f82b70-3d6b-4c74-9655-08e4552ee8b4-config\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.310330 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.331385 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.350409 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.370954 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.410877 4680 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.430688 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.450174 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.470505 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.491411 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.516906 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.530958 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.550806 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.570092 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.590821 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.610753 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.630811 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.650646 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.670763 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.690617 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.710832 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.731433 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.751039 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.770101 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 16:07:41 crc 
Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.790728 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.830939 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnx8p\" (UniqueName: \"kubernetes.io/projected/937d1b38-2a29-4846-bb8c-7995c583ac89-kube-api-access-gnx8p\") pod \"console-operator-58897d9998-x6xh2\" (UID: \"937d1b38-2a29-4846-bb8c-7995c583ac89\") " pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.847325 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcksc\" (UniqueName: \"kubernetes.io/projected/17623135-9fc0-4d3f-a017-bcb3196bede3-kube-api-access-gcksc\") pod \"machine-api-operator-5694c8668f-fbs7f\" (UID: \"17623135-9fc0-4d3f-a017-bcb3196bede3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.870631 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cclmp\" (UniqueName: \"kubernetes.io/projected/e118dee6-50de-4dbc-bf0c-ddda27bd5da5-kube-api-access-cclmp\") pod \"apiserver-7bbb656c7d-7q7t5\" (UID: \"e118dee6-50de-4dbc-bf0c-ddda27bd5da5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.885982 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rqm\" (UniqueName: \"kubernetes.io/projected/b97f4ad2-2288-4dd8-a30f-fb5c407855a3-kube-api-access-c5rqm\") pod \"openshift-config-operator-7777fb866f-9bv9l\" (UID: \"b97f4ad2-2288-4dd8-a30f-fb5c407855a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.904723 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-469c9\" (UniqueName: \"kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9\") pod \"controller-manager-879f6c89f-khttt\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.926340 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9v6\" (UniqueName: \"kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6\") pod \"oauth-openshift-558db77b4-ndf74\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.926489 4680 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.949027 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsr2s\" (UniqueName: \"kubernetes.io/projected/fac02262-7a92-4d28-9b92-87da9e2ba68e-kube-api-access-lsr2s\") pod \"machine-approver-56656f9798-8bdz6\" (UID: \"fac02262-7a92-4d28-9b92-87da9e2ba68e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.963759 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.985406 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrq2q\" (UniqueName: \"kubernetes.io/projected/f93ff197-4612-44d8-b67e-c98ae2906899-kube-api-access-jrq2q\") pod \"authentication-operator-69f744f599-bjnls\" (UID: \"f93ff197-4612-44d8-b67e-c98ae2906899\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:41 crc kubenswrapper[4680]: I0126 16:07:41.999687 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.005213 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnjf4\" (UniqueName: \"kubernetes.io/projected/7e3521bb-af73-49aa-ac76-8b7dcabcdeba-kube-api-access-rnjf4\") pod \"cluster-image-registry-operator-dc59b4c8b-9m5h9\" (UID: \"7e3521bb-af73-49aa-ac76-8b7dcabcdeba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.038821 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4pq4\" (UniqueName: \"kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4\") pod \"console-f9d7485db-z2kjp\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.045584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcppf\" (UniqueName: \"kubernetes.io/projected/debda3fd-3a5a-4f25-b732-90eb3bade1d4-kube-api-access-gcppf\") pod \"apiserver-76f77b778f-dqgwn\" (UID: \"debda3fd-3a5a-4f25-b732-90eb3bade1d4\") " pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.075660 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmtv\" (UniqueName: \"kubernetes.io/projected/0e171291-ab8f-40f8-a0e3-2ddf10db9732-kube-api-access-hfmtv\") pod \"etcd-operator-b45778765-w5w9r\" (UID: \"0e171291-ab8f-40f8-a0e3-2ddf10db9732\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.082964 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vmv\" (UniqueName: \"kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv\") pod 
\"route-controller-manager-6576b87f9c-z6m6v\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.106772 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.111394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.112619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2vk\" (UniqueName: \"kubernetes.io/projected/7a10c9f7-b077-4828-a066-d5c07b773069-kube-api-access-xg2vk\") pod \"openshift-apiserver-operator-796bbdcf4f-mlldw\" (UID: \"7a10c9f7-b077-4828-a066-d5c07b773069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.121503 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.127323 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.130510 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.135314 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.150699 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.172215 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fbs7f"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.172891 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.181745 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.187330 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.192503 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.197952 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.201983 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.210358 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.217568 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.232009 4680 request.go:700] Waited for 1.015391693s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.235141 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.248147 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.251428 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.251476 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x6xh2"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.253838 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.264196 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17623135_9fc0_4d3f_a017_bcb3196bede3.slice/crio-adb2c5e81b0ebb3131af3e86fd21c20dbf2d105b27a617d91d9d6ce03a36ba6c WatchSource:0}: Error finding container adb2c5e81b0ebb3131af3e86fd21c20dbf2d105b27a617d91d9d6ce03a36ba6c: Status 404 returned error can't find the container with id adb2c5e81b0ebb3131af3e86fd21c20dbf2d105b27a617d91d9d6ce03a36ba6c Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.274717 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.290400 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.316023 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.331053 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
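
The "Waited for 1.015391693s due to client-side throttling, not priority and fairness" entry is not the API server pushing back: it is client-go's own token-bucket rate limiter (default roughly QPS=5, Burst=10) queuing the flood of Secret and ConfigMap GETs that a mass pod start triggers. In a client you control, the knobs live on rest.Config; the values below are illustrative, not a recommendation:

```go
// Raise client-go's client-side rate limit to reduce "Waited for ...s due
// to client-side throttling" delays under bursty startup load.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newClient() (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Requests beyond the token bucket wait in line, which is exactly
	// the delay reported in the request.go:700 log line above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}

func main() { _, _ = newClient() }
```
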
Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.336494 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.373630 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bjnls"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.374744 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.396025 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.411806 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.431432 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.450838 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.471746 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.490932 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.511206 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.531364 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.552561 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.575546 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.595408 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.611356 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.632420 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.650381 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.671543 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.686588 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.690482 4680 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-dns"/"dns-default" Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.696318 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1372a4ea_cf38_4ec7_afe5_90e7e1d22dca.slice/crio-f7e4ec26488ac4a5a7a4a2a0314998d2893d858f0393b2dac8c61e4017a83a6d WatchSource:0}: Error finding container f7e4ec26488ac4a5a7a4a2a0314998d2893d858f0393b2dac8c61e4017a83a6d: Status 404 returned error can't find the container with id f7e4ec26488ac4a5a7a4a2a0314998d2893d858f0393b2dac8c61e4017a83a6d Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.711535 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.730663 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.751212 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.772270 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.773499 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5w9r"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.776514 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5"] Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.790532 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode118dee6_50de_4dbc_bf0c_ddda27bd5da5.slice/crio-d2f802501f88945d4eb52885666181be47d687d5901ff904bb6684e21b540e92 WatchSource:0}: Error finding container d2f802501f88945d4eb52885666181be47d687d5901ff904bb6684e21b540e92: Status 404 returned error can't find the container with id d2f802501f88945d4eb52885666181be47d687d5901ff904bb6684e21b540e92 Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.793012 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.810788 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.831602 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.842793 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ndf74"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.849321 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.850735 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.854965 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.862188 4680 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.868259 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e3521bb_af73_49aa_ac76_8b7dcabcdeba.slice/crio-712f0da6677753d9e66276b3198087f7da0934ab01def3799d898f5e048e9368 WatchSource:0}: Error finding container 712f0da6677753d9e66276b3198087f7da0934ab01def3799d898f5e048e9368: Status 404 returned error can't find the container with id 712f0da6677753d9e66276b3198087f7da0934ab01def3799d898f5e048e9368 Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.868662 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e58c16_df02_4857_aba5_434321c87141.slice/crio-49d5d95c72ca62043f47a17d89b03c442b7f32e504566cca4c0819489a2f9dd5 WatchSource:0}: Error finding container 49d5d95c72ca62043f47a17d89b03c442b7f32e504566cca4c0819489a2f9dd5: Status 404 returned error can't find the container with id 49d5d95c72ca62043f47a17d89b03c442b7f32e504566cca4c0819489a2f9dd5 Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.870712 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.892298 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.892945 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36cf56f2_0fb2_4172_be72_a6c8097a2bf5.slice/crio-10a3faa4db7c84c939e19af811f5eb69ee9c537a8abbc2ff90fc6794fac4b978 WatchSource:0}: Error finding container 10a3faa4db7c84c939e19af811f5eb69ee9c537a8abbc2ff90fc6794fac4b978: Status 404 returned error can't find the container with id 10a3faa4db7c84c939e19af811f5eb69ee9c537a8abbc2ff90fc6794fac4b978 Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.910451 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.921738 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.931206 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.931850 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqgwn"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.945685 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw"] Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.950796 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.960357 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f58b269_9b27_441e_bd05_b99b435c29c9.slice/crio-56ca29dd55bff735203b3b3f30b15e5e0de1db13460dbbbb64103b87fd33d604 WatchSource:0}: Error finding 
container 56ca29dd55bff735203b3b3f30b15e5e0de1db13460dbbbb64103b87fd33d604: Status 404 returned error can't find the container with id 56ca29dd55bff735203b3b3f30b15e5e0de1db13460dbbbb64103b87fd33d604 Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.961144 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddebda3fd_3a5a_4f25_b732_90eb3bade1d4.slice/crio-b23d4f797f9ac2e944e9ce01f057282da2b20f1746c51246e2b93d1f8b65ee61 WatchSource:0}: Error finding container b23d4f797f9ac2e944e9ce01f057282da2b20f1746c51246e2b93d1f8b65ee61: Status 404 returned error can't find the container with id b23d4f797f9ac2e944e9ce01f057282da2b20f1746c51246e2b93d1f8b65ee61 Jan 26 16:07:42 crc kubenswrapper[4680]: W0126 16:07:42.970166 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a10c9f7_b077_4828_a066_d5c07b773069.slice/crio-1e1c3eda4dffa83d26e8c479ebf34984199957a099699bfc0f1e1bb3efa867d3 WatchSource:0}: Error finding container 1e1c3eda4dffa83d26e8c479ebf34984199957a099699bfc0f1e1bb3efa867d3: Status 404 returned error can't find the container with id 1e1c3eda4dffa83d26e8c479ebf34984199957a099699bfc0f1e1bb3efa867d3 Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.971680 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:07:42 crc kubenswrapper[4680]: I0126 16:07:42.992284 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.010392 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.042033 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.052549 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.076650 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.108371 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpmfg\" (UniqueName: \"kubernetes.io/projected/99923803-37e9-445f-bba0-d140e9123e83-kube-api-access-xpmfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nw2gk\" (UID: \"99923803-37e9-445f-bba0-d140e9123e83\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.131951 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsntx\" (UniqueName: \"kubernetes.io/projected/9e3eb885-ee39-406a-a004-27cab65b02f8-kube-api-access-fsntx\") pod \"cluster-samples-operator-665b6dd947-9496p\" (UID: \"9e3eb885-ee39-406a-a004-27cab65b02f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p"
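
The W-level "Failed to process watch event ... Status 404" entries are a benign startup race in the kubelet's cAdvisor integration: the cgroup watch fires as soon as CRI-O creates the container's cgroup, and the follow-up lookup can run before the container is fully registered, hence the 404; the same container IDs appear as started moments later. Those "SyncLoop (PLEG): event for pod ... ContainerStarted" records that follow come from the Pod Lifecycle Event Generator, which periodically relists containers and diffs the result against the previous snapshot, roughly like this simplified model (not kubelet source):

```go
// Toy model of the PLEG relist diff that emits ContainerStarted events.
package main

import "fmt"

// snapshot maps container ID -> pod, standing in for the CRI relist result.
type snapshot map[string]string

func relist(prev, cur snapshot) {
	for id, pod := range cur {
		if _, seen := prev[id]; !seen {
			fmt.Printf("SyncLoop (PLEG): event for pod %q: ContainerStarted %s\n", pod, id)
		}
	}
}

func main() {
	relist(snapshot{}, snapshot{
		"712f0da6677753d9": "openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9",
	})
}
```
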
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" event={"ID":"7e3521bb-af73-49aa-ac76-8b7dcabcdeba","Type":"ContainerStarted","Data":"712f0da6677753d9e66276b3198087f7da0934ab01def3799d898f5e048e9368"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.144714 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdjq\" (UniqueName: \"kubernetes.io/projected/643d9c97-4160-40e2-9f56-e200526e2a8b-kube-api-access-9qdjq\") pod \"downloads-7954f5f757-fgknk\" (UID: \"643d9c97-4160-40e2-9f56-e200526e2a8b\") " pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.156622 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" event={"ID":"7a10c9f7-b077-4828-a066-d5c07b773069","Type":"ContainerStarted","Data":"1e1c3eda4dffa83d26e8c479ebf34984199957a099699bfc0f1e1bb3efa867d3"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.165268 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f82b70-3d6b-4c74-9655-08e4552ee8b4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp2cm\" (UID: \"41f82b70-3d6b-4c74-9655-08e4552ee8b4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.166255 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z2kjp" event={"ID":"9f58b269-9b27-441e-bd05-b99b435c29c9","Type":"ContainerStarted","Data":"56ca29dd55bff735203b3b3f30b15e5e0de1db13460dbbbb64103b87fd33d604"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.168909 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" event={"ID":"e118dee6-50de-4dbc-bf0c-ddda27bd5da5","Type":"ContainerStarted","Data":"d2f802501f88945d4eb52885666181be47d687d5901ff904bb6684e21b540e92"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.170455 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.179583 4680 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-khttt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.179626 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.190147 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201523 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" 
event={"ID":"17623135-9fc0-4d3f-a017-bcb3196bede3","Type":"ContainerStarted","Data":"fbcc29796b120bd3e6d331343297fbd852a30a41c46de855652ec746fa41b070"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201570 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201581 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" event={"ID":"17623135-9fc0-4d3f-a017-bcb3196bede3","Type":"ContainerStarted","Data":"d2ca926eba0ebd7d8cc65257b2fc20ddaa321830f8bd612c07087fd4906cf82c"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201591 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201601 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" event={"ID":"17623135-9fc0-4d3f-a017-bcb3196bede3","Type":"ContainerStarted","Data":"adb2c5e81b0ebb3131af3e86fd21c20dbf2d105b27a617d91d9d6ce03a36ba6c"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201674 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" event={"ID":"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca","Type":"ContainerStarted","Data":"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" event={"ID":"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca","Type":"ContainerStarted","Data":"f7e4ec26488ac4a5a7a4a2a0314998d2893d858f0393b2dac8c61e4017a83a6d"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201700 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerStarted","Data":"43bc7aff41378986d7fdf2752c3f9779fc10c1e3141b4982a7b2fc598c58fbb3"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201710 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerStarted","Data":"94d580179ba3e47555a08504d0888132b73de35ba98f990bb8a4db22aff0fbc7"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201732 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" event={"ID":"fac02262-7a92-4d28-9b92-87da9e2ba68e","Type":"ContainerStarted","Data":"d68159ab5472b53647311d0827e91b2dd682156c3efb75e2a7ac343410e62702"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201741 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" event={"ID":"fac02262-7a92-4d28-9b92-87da9e2ba68e","Type":"ContainerStarted","Data":"76bf85b40e4c5fee66fd5a3888ac9dc019964b09c5eee743f1f1f4e8212615b3"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201749 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" 
event={"ID":"f93ff197-4612-44d8-b67e-c98ae2906899","Type":"ContainerStarted","Data":"95cf0628de78b55d32e16747ae5424f05820478bd01254d2b31784d2ada474af"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" event={"ID":"f93ff197-4612-44d8-b67e-c98ae2906899","Type":"ContainerStarted","Data":"69dfe6b8e6b10234f0ee5e363d28b3b487dc3c1d4becdc6cc65e8b9ca347d86a"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" event={"ID":"84e58c16-df02-4857-aba5-434321c87141","Type":"ContainerStarted","Data":"49d5d95c72ca62043f47a17d89b03c442b7f32e504566cca4c0819489a2f9dd5"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.201777 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" event={"ID":"36cf56f2-0fb2-4172-be72-a6c8097a2bf5","Type":"ContainerStarted","Data":"10a3faa4db7c84c939e19af811f5eb69ee9c537a8abbc2ff90fc6794fac4b978"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.205340 4680 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z6m6v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.205378 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.209926 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.210542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" event={"ID":"debda3fd-3a5a-4f25-b732-90eb3bade1d4","Type":"ContainerStarted","Data":"b23d4f797f9ac2e944e9ce01f057282da2b20f1746c51246e2b93d1f8b65ee61"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.212802 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" event={"ID":"0e171291-ab8f-40f8-a0e3-2ddf10db9732","Type":"ContainerStarted","Data":"2d33be1e768418a82f9e953414e0bcbad98e1005fadd2727b776a12223b8a986"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.217134 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" event={"ID":"937d1b38-2a29-4846-bb8c-7995c583ac89","Type":"ContainerStarted","Data":"8cd0ab1f642d2f6e6646c9940e29e36b43b29c3eb962012a640f557c7dae530a"} Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.217356 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" event={"ID":"937d1b38-2a29-4846-bb8c-7995c583ac89","Type":"ContainerStarted","Data":"8c93e605695acd2b588d89aaecb6637d5bf16cb47145a7a8eea1ee7b77db87cb"} Jan 26 16:07:43 crc 
Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.217812 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.219950 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-x6xh2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.219992 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" podUID="937d1b38-2a29-4846-bb8c-7995c583ac89" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.233626 4680 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.249258 4680 request.go:700] Waited for 1.994090238s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.252612 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.271981 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.290530 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.311820 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.331643 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.343363 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.349782 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.355994 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.357794 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.366761 4680 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.497580 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.497964 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfe0c02d-2802-4506-8adf-dcf3ff96e398-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.497985 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede4cd45-4ef2-4302-8b01-db0e21c400a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.498009 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds47p\" (UniqueName: \"kubernetes.io/projected/453f2d30-8c6f-4626-9078-b34554f72d7b-kube-api-access-ds47p\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.498028 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff9c6c4c-e626-48f7-ae69-c50da92766d5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.498044 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vh2\" (UniqueName: \"kubernetes.io/projected/f5bdf39f-15a1-4f08-b536-73387288994b-kube-api-access-v5vh2\") pod \"migrator-59844c95c7-vrdt7\" (UID: \"f5bdf39f-15a1-4f08-b536-73387288994b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.498413 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.498796 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls\") pod 
\"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.499194 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-images\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.499579 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:43.999564971 +0000 UTC m=+139.160837240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.499683 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede4cd45-4ef2-4302-8b01-db0e21c400a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.499753 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4r6t\" (UniqueName: \"kubernetes.io/projected/210a0892-bb13-4734-8783-13d4cb76fb4d-kube-api-access-g4r6t\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.500063 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2915ee10-fa21-477c-8bae-13bae7d3ab0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.500118 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-metrics-tls\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.500137 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k295\" (UniqueName: \"kubernetes.io/projected/5ebd5c1d-3782-425e-8bdd-893779283f56-kube-api-access-8k295\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.500420 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.500983 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ebd5c1d-3782-425e-8bdd-893779283f56-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.501042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.501138 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2915ee10-fa21-477c-8bae-13bae7d3ab0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.501225 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhq4\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.501656 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.502155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff9c6c4c-e626-48f7-ae69-c50da92766d5-metrics-tls\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210a0892-bb13-4734-8783-13d4cb76fb4d-proxy-tls\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507205 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede4cd45-4ef2-4302-8b01-db0e21c400a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507270 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mh5g\" (UniqueName: \"kubernetes.io/projected/cfe0c02d-2802-4506-8adf-dcf3ff96e398-kube-api-access-5mh5g\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507292 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfe0c02d-2802-4506-8adf-dcf3ff96e398-proxy-tls\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507324 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507354 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-srv-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507372 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vnp\" (UniqueName: \"kubernetes.io/projected/ff9c6c4c-e626-48f7-ae69-c50da92766d5-kube-api-access-k2vnp\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507404 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff9c6c4c-e626-48f7-ae69-c50da92766d5-trusted-ca\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507422 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507479 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql46k\" (UniqueName: \"kubernetes.io/projected/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-kube-api-access-ql46k\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507496 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pznt2\" (UniqueName: \"kubernetes.io/projected/2915ee10-fa21-477c-8bae-13bae7d3ab0b-kube-api-access-pznt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.507516 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.608509 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.608912 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52101413-4b6f-4b34-bbfc-27d16b75b2a1-secret-volume\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.608935 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-srv-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.608951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vnp\" (UniqueName: \"kubernetes.io/projected/ff9c6c4c-e626-48f7-ae69-c50da92766d5-kube-api-access-k2vnp\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.608968 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/ff9c6c4c-e626-48f7-ae69-c50da92766d5-trusted-ca\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609010 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-stats-auth\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609026 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-key\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609046 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609063 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-cabundle\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609106 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql46k\" (UniqueName: \"kubernetes.io/projected/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-kube-api-access-ql46k\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609122 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pznt2\" (UniqueName: \"kubernetes.io/projected/2915ee10-fa21-477c-8bae-13bae7d3ab0b-kube-api-access-pznt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609138 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wnns\" (UniqueName: \"kubernetes.io/projected/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-kube-api-access-4wnns\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609157 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: 
\"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609172 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-mountpoint-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609197 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vncjf\" (UniqueName: \"kubernetes.io/projected/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-kube-api-access-vncjf\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609221 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfe0c02d-2802-4506-8adf-dcf3ff96e398-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609238 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede4cd45-4ef2-4302-8b01-db0e21c400a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609257 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed520de6-ce85-4283-b919-ecb2b6158668-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609276 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c6ws\" (UniqueName: \"kubernetes.io/projected/ed520de6-ce85-4283-b919-ecb2b6158668-kube-api-access-4c6ws\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609293 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74672\" (UniqueName: \"kubernetes.io/projected/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-kube-api-access-74672\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609310 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds47p\" (UniqueName: \"kubernetes.io/projected/453f2d30-8c6f-4626-9078-b34554f72d7b-kube-api-access-ds47p\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609326 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65lqx\" (UniqueName: \"kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609342 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d377938-8d0c-42fe-b1ad-5d105de349be-config\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609357 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fml\" (UniqueName: \"kubernetes.io/projected/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-kube-api-access-l2fml\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609374 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff9c6c4c-e626-48f7-ae69-c50da92766d5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609388 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5vh2\" (UniqueName: \"kubernetes.io/projected/f5bdf39f-15a1-4f08-b536-73387288994b-kube-api-access-v5vh2\") pod \"migrator-59844c95c7-vrdt7\" (UID: \"f5bdf39f-15a1-4f08-b536-73387288994b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609403 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609418 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d377938-8d0c-42fe-b1ad-5d105de349be-serving-cert\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609436 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc716f95-70ff-431c-8234-d4f7bf46a08f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609452 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-default-certificate\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609465 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-plugins-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609489 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-metrics-certs\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609504 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609522 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609537 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-service-ca-bundle\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609553 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-images\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609569 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-srv-cert\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609584 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede4cd45-4ef2-4302-8b01-db0e21c400a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609601 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4r6t\" (UniqueName: \"kubernetes.io/projected/210a0892-bb13-4734-8783-13d4cb76fb4d-kube-api-access-g4r6t\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609618 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2915ee10-fa21-477c-8bae-13bae7d3ab0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-metrics-tls\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609665 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k295\" (UniqueName: \"kubernetes.io/projected/5ebd5c1d-3782-425e-8bdd-893779283f56-kube-api-access-8k295\") pod \"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609681 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609695 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-csi-data-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609713 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609730 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ebd5c1d-3782-425e-8bdd-893779283f56-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609746 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsggw\" (UniqueName: \"kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609769 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-registration-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609785 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/088d53b5-33b9-434a-83b5-2347e2dc38cc-cert\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609799 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbls\" (UniqueName: \"kubernetes.io/projected/088d53b5-33b9-434a-83b5-2347e2dc38cc-kube-api-access-dkbls\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609815 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609830 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2915ee10-fa21-477c-8bae-13bae7d3ab0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.609846 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dead528-ae26-4834-8971-b119b113c807-config-volume\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610615 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc716f95-70ff-431c-8234-d4f7bf46a08f-config\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhq4\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-socket-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610669 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610686 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-node-bootstrap-token\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610703 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610718 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff9c6c4c-e626-48f7-ae69-c50da92766d5-metrics-tls\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610734 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dead528-ae26-4834-8971-b119b113c807-metrics-tls\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610748 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-certs\") pod \"machine-config-server-6slhj\" (UID: 
\"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-tmpfs\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610792 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dvr\" (UniqueName: \"kubernetes.io/projected/e75ebcde-12f1-4748-b7e7-f416820ae61b-kube-api-access-d6dvr\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc716f95-70ff-431c-8234-d4f7bf46a08f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610825 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210a0892-bb13-4734-8783-13d4cb76fb4d-proxy-tls\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610846 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbns\" (UniqueName: \"kubernetes.io/projected/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-kube-api-access-vbbns\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610863 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede4cd45-4ef2-4302-8b01-db0e21c400a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610886 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610901 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bd97\" (UniqueName: \"kubernetes.io/projected/2dead528-ae26-4834-8971-b119b113c807-kube-api-access-2bd97\") pod \"dns-default-hhs6d\" (UID: 
\"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610918 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610933 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-profile-collector-cert\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610957 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfe0c02d-2802-4506-8adf-dcf3ff96e398-proxy-tls\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610972 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mh5g\" (UniqueName: \"kubernetes.io/projected/cfe0c02d-2802-4506-8adf-dcf3ff96e398-kube-api-access-5mh5g\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.610989 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xz6k\" (UniqueName: \"kubernetes.io/projected/4d377938-8d0c-42fe-b1ad-5d105de349be-kube-api-access-9xz6k\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.611005 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.611020 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p87d\" (UniqueName: \"kubernetes.io/projected/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-kube-api-access-8p87d\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.611034 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-webhook-cert\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: 
\"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.611150 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.111136345 +0000 UTC m=+139.272408614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.612941 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.619582 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede4cd45-4ef2-4302-8b01-db0e21c400a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.620454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.620761 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624059 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624234 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-srv-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624299 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624806 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-metrics-tls\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624816 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210a0892-bb13-4734-8783-13d4cb76fb4d-images\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.624952 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfe0c02d-2802-4506-8adf-dcf3ff96e398-proxy-tls\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.625485 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2915ee10-fa21-477c-8bae-13bae7d3ab0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.626395 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede4cd45-4ef2-4302-8b01-db0e21c400a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.626688 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2915ee10-fa21-477c-8bae-13bae7d3ab0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.627144 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff9c6c4c-e626-48f7-ae69-c50da92766d5-trusted-ca\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.629618 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5ebd5c1d-3782-425e-8bdd-893779283f56-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.629923 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/453f2d30-8c6f-4626-9078-b34554f72d7b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnhhn\" (UID: \"453f2d30-8c6f-4626-9078-b34554f72d7b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.632571 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfe0c02d-2802-4506-8adf-dcf3ff96e398-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.634621 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210a0892-bb13-4734-8783-13d4cb76fb4d-proxy-tls\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.635962 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.656603 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff9c6c4c-e626-48f7-ae69-c50da92766d5-metrics-tls\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.686233 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4r6t\" (UniqueName: \"kubernetes.io/projected/210a0892-bb13-4734-8783-13d4cb76fb4d-kube-api-access-g4r6t\") pod \"machine-config-operator-74547568cd-h7dcz\" (UID: \"210a0892-bb13-4734-8783-13d4cb76fb4d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712335 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dead528-ae26-4834-8971-b119b113c807-config-volume\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc716f95-70ff-431c-8234-d4f7bf46a08f-config\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712392 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-socket-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712412 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712442 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-node-bootstrap-token\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dead528-ae26-4834-8971-b119b113c807-metrics-tls\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712478 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-certs\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712497 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-tmpfs\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712526 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dvr\" (UniqueName: \"kubernetes.io/projected/e75ebcde-12f1-4748-b7e7-f416820ae61b-kube-api-access-d6dvr\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712544 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc716f95-70ff-431c-8234-d4f7bf46a08f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712574 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbns\" (UniqueName: 
\"kubernetes.io/projected/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-kube-api-access-vbbns\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712600 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712618 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bd97\" (UniqueName: \"kubernetes.io/projected/2dead528-ae26-4834-8971-b119b113c807-kube-api-access-2bd97\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712653 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-profile-collector-cert\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712679 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xz6k\" (UniqueName: \"kubernetes.io/projected/4d377938-8d0c-42fe-b1ad-5d105de349be-kube-api-access-9xz6k\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712699 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-webhook-cert\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712718 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p87d\" (UniqueName: \"kubernetes.io/projected/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-kube-api-access-8p87d\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712739 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52101413-4b6f-4b34-bbfc-27d16b75b2a1-secret-volume\") pod \"collect-profiles-29490720-mr5ft\" (UID: 
\"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-key\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712794 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-stats-auth\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712819 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-cabundle\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712841 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wnns\" (UniqueName: \"kubernetes.io/projected/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-kube-api-access-4wnns\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712871 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-mountpoint-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712893 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712911 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vncjf\" (UniqueName: \"kubernetes.io/projected/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-kube-api-access-vncjf\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712942 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed520de6-ce85-4283-b919-ecb2b6158668-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712959 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c6ws\" 
(UniqueName: \"kubernetes.io/projected/ed520de6-ce85-4283-b919-ecb2b6158668-kube-api-access-4c6ws\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712979 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74672\" (UniqueName: \"kubernetes.io/projected/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-kube-api-access-74672\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.712996 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d377938-8d0c-42fe-b1ad-5d105de349be-config\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713013 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fml\" (UniqueName: \"kubernetes.io/projected/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-kube-api-access-l2fml\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713049 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65lqx\" (UniqueName: \"kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713090 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d377938-8d0c-42fe-b1ad-5d105de349be-serving-cert\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713109 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc716f95-70ff-431c-8234-d4f7bf46a08f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713129 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-default-certificate\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713148 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-plugins-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: 
\"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713165 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-metrics-certs\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713184 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713202 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-service-ca-bundle\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713222 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-srv-cert\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713255 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-csi-data-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713293 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsggw\" (UniqueName: \"kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713328 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-registration-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713347 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/088d53b5-33b9-434a-83b5-2347e2dc38cc-cert\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.713362 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbls\" (UniqueName: \"kubernetes.io/projected/088d53b5-33b9-434a-83b5-2347e2dc38cc-kube-api-access-dkbls\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.714469 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dead528-ae26-4834-8971-b119b113c807-config-volume\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.719137 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-mountpoint-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.719583 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-socket-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.725996 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc716f95-70ff-431c-8234-d4f7bf46a08f-config\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.726104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-plugins-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.730816 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-metrics-certs\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.732286 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.232270181 +0000 UTC m=+139.393542440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
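Editor's note: this MountVolume.MountDevice failure, and the UnmountVolume.TearDown failures that follow, share one root cause: the kubevirt.io.hostpath-provisioner CSI node plugin has not yet registered with the kubelet (its pod, hostpath-provisioner/csi-hostpathplugin-w9dh6, is still having its socket-dir, registration-dir, and csi-data-dir host paths mounted in the surrounding entries), so every operation on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails the registered-driver lookup and is requeued with a 500ms durationBeforeRetry. The Go sketch below mimics that retry cadence; it is illustrative only, not kubelet source, and registered() is a hypothetical stand-in for the kubelet's driver registry:

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative only, not kubelet code. The driver "appears" after a few
    // attempts, standing in for csi-hostpathplugin-w9dh6 finishing startup
    // and registering over its plugin-registration socket.
    const driver = "kubevirt.io.hostpath-provisioner"

    func registered(attempt int) bool { return attempt >= 4 }

    func mountDevice(attempt int) error {
        if !registered(attempt) {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            if err := mountDevice(attempt); err != nil {
                fmt.Printf("attempt %d: %v; retrying in 500ms\n", attempt, err)
                time.Sleep(500 * time.Millisecond) // the log's durationBeforeRetry
                continue
            }
            fmt.Printf("attempt %d: MountDevice succeeded\n", attempt)
            return
        }
    }

Once the node plugin registers, the identical operation is expected to succeed on a later retry, which is why these entries repeat at 500ms intervals rather than escalate.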
\"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-default-certificate\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.760550 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d377938-8d0c-42fe-b1ad-5d105de349be-serving-cert\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.761701 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-cabundle\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.761938 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.762564 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dead528-ae26-4834-8971-b119b113c807-metrics-tls\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.763050 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-tmpfs\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.763347 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.763540 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k295\" (UniqueName: \"kubernetes.io/projected/5ebd5c1d-3782-425e-8bdd-893779283f56-kube-api-access-8k295\") pod \"control-plane-machine-set-operator-78cbb6b69f-cz2m2\" (UID: \"5ebd5c1d-3782-425e-8bdd-893779283f56\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.763805 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed520de6-ce85-4283-b919-ecb2b6158668-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.764767 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-service-ca-bundle\") pod 
\"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.764844 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-csi-data-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.764888 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-registration-dir\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.767571 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-certs\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.768605 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-stats-auth\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.768972 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc716f95-70ff-431c-8234-d4f7bf46a08f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.770437 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.773211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.773913 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e75ebcde-12f1-4748-b7e7-f416820ae61b-node-bootstrap-token\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.774583 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-signing-key\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.776819 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5vh2\" (UniqueName: \"kubernetes.io/projected/f5bdf39f-15a1-4f08-b536-73387288994b-kube-api-access-v5vh2\") pod \"migrator-59844c95c7-vrdt7\" (UID: \"f5bdf39f-15a1-4f08-b536-73387288994b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.784397 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede4cd45-4ef2-4302-8b01-db0e21c400a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qsz44\" (UID: \"ede4cd45-4ef2-4302-8b01-db0e21c400a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.785928 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.790751 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-srv-cert\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.793092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhq4\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.794410 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.799865 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/088d53b5-33b9-434a-83b5-2347e2dc38cc-cert\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.814484 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.814674 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.314650423 +0000 UTC m=+139.475922692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.815117 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.815518 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.315507157 +0000 UTC m=+139.476779426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.842775 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mh5g\" (UniqueName: \"kubernetes.io/projected/cfe0c02d-2802-4506-8adf-dcf3ff96e398-kube-api-access-5mh5g\") pod \"machine-config-controller-84d6567774-wkb4v\" (UID: \"cfe0c02d-2802-4506-8adf-dcf3ff96e398\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.856680 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vnp\" (UniqueName: \"kubernetes.io/projected/ff9c6c4c-e626-48f7-ae69-c50da92766d5-kube-api-access-k2vnp\") pod \"ingress-operator-5b745b69d9-7n2zf\" (UID: \"ff9c6c4c-e626-48f7-ae69-c50da92766d5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.860627 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.870488 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fgknk"] Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.883140 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p"] Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.893571 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql46k\" (UniqueName: \"kubernetes.io/projected/5e264ebe-f53f-4365-90ca-4e7bbe7bd885-kube-api-access-ql46k\") pod \"dns-operator-744455d44c-w8tz5\" (UID: \"5e264ebe-f53f-4365-90ca-4e7bbe7bd885\") " pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.906501 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pznt2\" (UniqueName: \"kubernetes.io/projected/2915ee10-fa21-477c-8bae-13bae7d3ab0b-kube-api-access-pznt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5sfq5\" (UID: \"2915ee10-fa21-477c-8bae-13bae7d3ab0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.921137 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.921239 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.42122375 +0000 UTC m=+139.582496019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.921543 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:43 crc kubenswrapper[4680]: E0126 16:07:43.921838 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.421829226 +0000 UTC m=+139.583101495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.930486 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk"] Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.945466 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbls\" (UniqueName: \"kubernetes.io/projected/088d53b5-33b9-434a-83b5-2347e2dc38cc-kube-api-access-dkbls\") pod \"ingress-canary-6s8qz\" (UID: \"088d53b5-33b9-434a-83b5-2347e2dc38cc\") " pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:43 crc kubenswrapper[4680]: I0126 16:07:43.975158 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vncjf\" (UniqueName: \"kubernetes.io/projected/59e9f7e1-0b14-4284-bc1d-ff302dc46dcd-kube-api-access-vncjf\") pod \"service-ca-9c57cc56f-lprxn\" (UID: \"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd\") " pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.003039 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2fml\" (UniqueName: \"kubernetes.io/projected/8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8-kube-api-access-l2fml\") pod \"package-server-manager-789f6589d5-s6kzf\" (UID: \"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.007040 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4c6ws\" (UniqueName: \"kubernetes.io/projected/ed520de6-ce85-4283-b919-ecb2b6158668-kube-api-access-4c6ws\") pod \"multus-admission-controller-857f4d67dd-kz692\" (UID: \"ed520de6-ce85-4283-b919-ecb2b6158668\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.008488 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.019864 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.024756 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74672\" (UniqueName: \"kubernetes.io/projected/c5b7fdeb-1a25-4193-bf77-6645f6e8370a-kube-api-access-74672\") pod \"router-default-5444994796-9kzqd\" (UID: \"c5b7fdeb-1a25-4193-bf77-6645f6e8370a\") " pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.026251 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.026559 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm"] Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.026752 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.027083 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.527052256 +0000 UTC m=+139.688324525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.033850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dvr\" (UniqueName: \"kubernetes.io/projected/e75ebcde-12f1-4748-b7e7-f416820ae61b-kube-api-access-d6dvr\") pod \"machine-config-server-6slhj\" (UID: \"e75ebcde-12f1-4748-b7e7-f416820ae61b\") " pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.038382 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.046315 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.054237 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.054805 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xz6k\" (UniqueName: \"kubernetes.io/projected/4d377938-8d0c-42fe-b1ad-5d105de349be-kube-api-access-9xz6k\") pod \"service-ca-operator-777779d784-46pbk\" (UID: \"4d377938-8d0c-42fe-b1ad-5d105de349be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.066443 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.073256 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p87d\" (UniqueName: \"kubernetes.io/projected/0bf062c4-962d-4bba-98d4-ee41fe1cc0b1-kube-api-access-8p87d\") pod \"catalog-operator-68c6474976-cg5d7\" (UID: \"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.091566 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.092939 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65lqx\" (UniqueName: \"kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx\") pod \"collect-profiles-29490720-mr5ft\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.097590 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.106568 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.118376 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.126186 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc716f95-70ff-431c-8234-d4f7bf46a08f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k27bz\" (UID: \"fc716f95-70ff-431c-8234-d4f7bf46a08f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.126352 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.128687 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.128927 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.628915693 +0000 UTC m=+139.790187962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.134110 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.147898 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.150827 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbns\" (UniqueName: \"kubernetes.io/projected/1d1c4ec3-28a9-4741-844b-ea48d59f84d3-kube-api-access-vbbns\") pod \"packageserver-d55dfcdfc-vjf22\" (UID: \"1d1c4ec3-28a9-4741-844b-ea48d59f84d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.156414 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.165554 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6slhj" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.167798 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6s8qz" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.174602 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wnns\" (UniqueName: \"kubernetes.io/projected/c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c-kube-api-access-4wnns\") pod \"csi-hostpathplugin-w9dh6\" (UID: \"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c\") " pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.203983 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz"] Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.204930 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bd97\" (UniqueName: \"kubernetes.io/projected/2dead528-ae26-4834-8971-b119b113c807-kube-api-access-2bd97\") pod \"dns-default-hhs6d\" (UID: \"2dead528-ae26-4834-8971-b119b113c807\") " pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.214203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsggw\" (UniqueName: \"kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw\") pod \"marketplace-operator-79b997595-vvkm4\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.222265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" event={"ID":"41f82b70-3d6b-4c74-9655-08e4552ee8b4","Type":"ContainerStarted","Data":"8edf0d601cc62675f15fa7b2f6c6324e228132915a70af9c867be37ee747b2ca"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.224255 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" event={"ID":"99923803-37e9-445f-bba0-d140e9123e83","Type":"ContainerStarted","Data":"c823c1ec7df6fc3299a12f12e4fbc4e7984f91a66a0d15761a79eb6ce9ad797e"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.228042 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z2kjp" event={"ID":"9f58b269-9b27-441e-bd05-b99b435c29c9","Type":"ContainerStarted","Data":"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.230204 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.230390 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.730367189 +0000 UTC m=+139.891639448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.230739 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.231104 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.731085519 +0000 UTC m=+139.892357788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.233867 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" event={"ID":"84e58c16-df02-4857-aba5-434321c87141","Type":"ContainerStarted","Data":"55d234ea72af885d7adf006600c4496c2e52aeef06c2d3c43c87ae89f47b6a34"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.234475 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.238219 4680 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ndf74 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.238262 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused"
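Editor's note: the oauth-openshift readiness failure above is startup ordering, not a crash: the container has only just started (the ContainerStarted event a few entries earlier), so nothing is listening on 10.217.0.6:6443 yet and the kubelet's GET against /healthz is refused; the probe is simply retried until the server binds. A rough Go equivalent of such an HTTPS readiness check follows; this is an illustrative sketch, not the kubelet's actual prober, and TLS verification is skipped only because the sketch has no access to the cluster CA bundle:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // ready performs one kubelet-style readiness check: any transport error
    // (e.g. "connect: connection refused") or a non-200 status counts as
    // not ready in this sketch, and the caller probes again later.
    func ready(url string) bool {
        client := &http.Client{
            Timeout: time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("Probe failed:", err)
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println("ready:", ready("https://10.217.0.6:6443/healthz"))
    }

The status="" in the "SyncLoop (probe)" entry above reflects the same thing: the readiness result is simply not yet known for a container this young.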
event={"ID":"fac02262-7a92-4d28-9b92-87da9e2ba68e","Type":"ContainerStarted","Data":"1019b433a01cdf2c9712bd564d7666a819eba641abbc35743dc3a7ee712e4fc2"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.245942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" event={"ID":"7a10c9f7-b077-4828-a066-d5c07b773069","Type":"ContainerStarted","Data":"657d7043b15e7e05244e9107c6fe41690d4a4451421dfa44dff8ee45bfb3eecf"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.251299 4680 generic.go:334] "Generic (PLEG): container finished" podID="e118dee6-50de-4dbc-bf0c-ddda27bd5da5" containerID="8128954e136584e79a9a00e66bc96ac4819deafa88060b0dfba00649b99e6700" exitCode=0 Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.251546 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" event={"ID":"e118dee6-50de-4dbc-bf0c-ddda27bd5da5","Type":"ContainerDied","Data":"8128954e136584e79a9a00e66bc96ac4819deafa88060b0dfba00649b99e6700"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.259637 4680 generic.go:334] "Generic (PLEG): container finished" podID="debda3fd-3a5a-4f25-b732-90eb3bade1d4" containerID="aed97f88e79f22d28deac875265bc1398a6f7ad13f66a8eca7c03a66fdd45175" exitCode=0 Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.259718 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" event={"ID":"debda3fd-3a5a-4f25-b732-90eb3bade1d4","Type":"ContainerDied","Data":"aed97f88e79f22d28deac875265bc1398a6f7ad13f66a8eca7c03a66fdd45175"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.260523 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fgknk" event={"ID":"643d9c97-4160-40e2-9f56-e200526e2a8b","Type":"ContainerStarted","Data":"285364e72457c5a5c2e198d989a86620120170fe147e1f05486964a566df5d08"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.261563 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" event={"ID":"7e3521bb-af73-49aa-ac76-8b7dcabcdeba","Type":"ContainerStarted","Data":"6dcbcb3f3087cf49bc9ab02a841fd60320300a5a78d4e641b796235567a7a294"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.263127 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" event={"ID":"36cf56f2-0fb2-4172-be72-a6c8097a2bf5","Type":"ContainerStarted","Data":"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.266495 4680 generic.go:334] "Generic (PLEG): container finished" podID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerID="43bc7aff41378986d7fdf2752c3f9779fc10c1e3141b4982a7b2fc598c58fbb3" exitCode=0 Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.267368 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerDied","Data":"43bc7aff41378986d7fdf2752c3f9779fc10c1e3141b4982a7b2fc598c58fbb3"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.267439 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.267452 
4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerStarted","Data":"17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a"} Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.271802 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.279999 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.296656 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn"] Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.300657 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.331609 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.331790 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.831763483 +0000 UTC m=+139.993035752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.332282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.339888 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.839874566 +0000 UTC m=+140.001146835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.382543 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.419351 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.434094 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.434211 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.934195166 +0000 UTC m=+140.095467435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.434440 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.434724 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:44.934715641 +0000 UTC m=+140.095987910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.440967 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:44 crc kubenswrapper[4680]: W0126 16:07:44.469517 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210a0892_bb13_4734_8783_13d4cb76fb4d.slice/crio-b0b4e6e2910cc6aeea6d05f4bde18920a0b46e73b310b2732962fac164e0b5e7 WatchSource:0}: Error finding container b0b4e6e2910cc6aeea6d05f4bde18920a0b46e73b310b2732962fac164e0b5e7: Status 404 returned error can't find the container with id b0b4e6e2910cc6aeea6d05f4bde18920a0b46e73b310b2732962fac164e0b5e7 Jan 26 16:07:44 crc kubenswrapper[4680]: W0126 16:07:44.474810 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod453f2d30_8c6f_4626_9078_b34554f72d7b.slice/crio-c72c595e4489e28d83764ae67e3812890d8526ff8b280e369449419b720ceb28 WatchSource:0}: Error finding container c72c595e4489e28d83764ae67e3812890d8526ff8b280e369449419b720ceb28: Status 404 returned error can't find the container with id c72c595e4489e28d83764ae67e3812890d8526ff8b280e369449419b720ceb28 Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.538785 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.538924 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.038902862 +0000 UTC m=+140.200175131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.539318 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.540020 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.040004222 +0000 UTC m=+140.201276491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.548373 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.642697 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.643078 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.143035421 +0000 UTC m=+140.304307690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.643446 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.643777 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.143767801 +0000 UTC m=+140.305040070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.747976 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.748241 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.248228 +0000 UTC m=+140.409500269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.788323 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-w5w9r" podStartSLOduration=118.78829603 podStartE2EDuration="1m58.78829603s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:44.747247633 +0000 UTC m=+139.908519902" watchObservedRunningTime="2026-01-26 16:07:44.78829603 +0000 UTC m=+139.949568299" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.853269 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mlldw" podStartSLOduration=119.853252274 podStartE2EDuration="1m59.853252274s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:44.851221558 +0000 UTC m=+140.012493827" watchObservedRunningTime="2026-01-26 16:07:44.853252274 +0000 UTC m=+140.014524543" Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.856789 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.857094 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.357081949 +0000 UTC m=+140.518354218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:44 crc kubenswrapper[4680]: I0126 16:07:44.959438 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:44 crc kubenswrapper[4680]: E0126 16:07:44.959769 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.459756399 +0000 UTC m=+140.621028668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.000956 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" podStartSLOduration=119.00093688 podStartE2EDuration="1m59.00093688s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:44.999689755 +0000 UTC m=+140.160962034" watchObservedRunningTime="2026-01-26 16:07:45.00093688 +0000 UTC m=+140.162209149" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.005770 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w8tz5"] Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.036900 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" podStartSLOduration=120.036884797 podStartE2EDuration="2m0.036884797s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.034958814 +0000 UTC m=+140.196231083" watchObservedRunningTime="2026-01-26 16:07:45.036884797 +0000 UTC m=+140.198157066" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.062666 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.063009 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.562995924 +0000 UTC m=+140.724268183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.076026 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" podStartSLOduration=120.076012921 podStartE2EDuration="2m0.076012921s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.075023574 +0000 UTC m=+140.236295843" watchObservedRunningTime="2026-01-26 16:07:45.076012921 +0000 UTC m=+140.237285190" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.174416 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fbs7f" podStartSLOduration=119.174386113 podStartE2EDuration="1m59.174386113s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.112773311 +0000 UTC m=+140.274045590" watchObservedRunningTime="2026-01-26 16:07:45.174386113 +0000 UTC m=+140.335658382" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.175808 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.176157 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.676142221 +0000 UTC m=+140.837414490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: W0126 16:07:45.233771 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e264ebe_f53f_4365_90ca_4e7bbe7bd885.slice/crio-6079a2d21be324252649f2ac7261febf179c990b6f98ee0606144c6ad0d2e847 WatchSource:0}: Error finding container 6079a2d21be324252649f2ac7261febf179c990b6f98ee0606144c6ad0d2e847: Status 404 returned error can't find the container with id 6079a2d21be324252649f2ac7261febf179c990b6f98ee0606144c6ad0d2e847 Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.298976 4680 csr.go:261] certificate signing request csr-vm2qg is approved, waiting to be issued Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.313834 4680 csr.go:257] certificate signing request csr-vm2qg is issued Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.325737 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.326141 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.826120489 +0000 UTC m=+140.987392758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.360298 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6slhj" event={"ID":"e75ebcde-12f1-4748-b7e7-f416820ae61b","Type":"ContainerStarted","Data":"bdaf3a2c99d0de399cb85b2cab1a589f1c0e4165868dc6b3212d2910d515fb09"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.400513 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7"] Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.401386 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" event={"ID":"5e264ebe-f53f-4365-90ca-4e7bbe7bd885","Type":"ContainerStarted","Data":"6079a2d21be324252649f2ac7261febf179c990b6f98ee0606144c6ad0d2e847"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.411584 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" event={"ID":"99923803-37e9-445f-bba0-d140e9123e83","Type":"ContainerStarted","Data":"e6548c9bcde60cf0f8c241ab1bf9cb017fcc45e83add7ce9530e9b7be136f0db"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.413477 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" event={"ID":"9e3eb885-ee39-406a-a004-27cab65b02f8","Type":"ContainerStarted","Data":"c8ca056724ddc5ee07beb63f1f5d580c9d0844e525c7e14a867aadc17abaa7ff"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.452923 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" event={"ID":"453f2d30-8c6f-4626-9078-b34554f72d7b","Type":"ContainerStarted","Data":"c72c595e4489e28d83764ae67e3812890d8526ff8b280e369449419b720ceb28"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.452973 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.453017 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9m5h9" podStartSLOduration=119.452997144 podStartE2EDuration="1m59.452997144s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.400245205 +0000 UTC m=+140.561517474" watchObservedRunningTime="2026-01-26 16:07:45.452997144 +0000 UTC m=+140.614269413" Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.453054 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.953025244 +0000 UTC m=+141.114297513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.456001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.457016 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7"] Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.461565 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:45.961549298 +0000 UTC m=+141.122821567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.489030 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kzqd" event={"ID":"c5b7fdeb-1a25-4193-bf77-6645f6e8370a","Type":"ContainerStarted","Data":"2c2db8a0399b43811670e1d00d04e14bdfbaaaa8ebe37c6055a31daa54bd321a"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.504856 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8bdz6" podStartSLOduration=121.504839177 podStartE2EDuration="2m1.504839177s" podCreationTimestamp="2026-01-26 16:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.456774937 +0000 UTC m=+140.618047206" watchObservedRunningTime="2026-01-26 16:07:45.504839177 +0000 UTC m=+140.666111446" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.517728 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44"] Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.570319 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.571321 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.071308132 +0000 UTC m=+141.232580401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.590160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" event={"ID":"210a0892-bb13-4734-8783-13d4cb76fb4d","Type":"ContainerStarted","Data":"b0b4e6e2910cc6aeea6d05f4bde18920a0b46e73b310b2732962fac164e0b5e7"} Jan 26 16:07:45 crc kubenswrapper[4680]: W0126 16:07:45.632723 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf062c4_962d_4bba_98d4_ee41fe1cc0b1.slice/crio-dd0a71c21672d9259f1b7bfc77d7353ff130916e346d3a35dcdd635bee538c49 WatchSource:0}: Error finding container dd0a71c21672d9259f1b7bfc77d7353ff130916e346d3a35dcdd635bee538c49: Status 404 returned error can't find the container with id dd0a71c21672d9259f1b7bfc77d7353ff130916e346d3a35dcdd635bee538c49 Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.670461 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fgknk" event={"ID":"643d9c97-4160-40e2-9f56-e200526e2a8b","Type":"ContainerStarted","Data":"ae2d13eef501aff144677e21c44efdcec7e3c157a17edcf03121b9fd4bb475ac"} Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.671279 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.671530 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.171519434 +0000 UTC m=+141.332791703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.671999 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.681420 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.681475 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.772833 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z2kjp" podStartSLOduration=120.772816196 podStartE2EDuration="2m0.772816196s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.695879073 +0000 UTC m=+140.857151342" watchObservedRunningTime="2026-01-26 16:07:45.772816196 +0000 UTC m=+140.934088465" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.786811 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.787621 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.287606222 +0000 UTC m=+141.448878481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.796375 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podStartSLOduration=120.796348502 podStartE2EDuration="2m0.796348502s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:45.775264373 +0000 UTC m=+140.936536642" watchObservedRunningTime="2026-01-26 16:07:45.796348502 +0000 UTC m=+140.957620771" Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.797988 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf"] Jan 26 16:07:45 crc kubenswrapper[4680]: I0126 16:07:45.891285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:45 crc kubenswrapper[4680]: E0126 16:07:45.891576 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.391563817 +0000 UTC m=+141.552836086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.004623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.011933 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.511912532 +0000 UTC m=+141.673184801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.025565 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.025845 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.525832294 +0000 UTC m=+141.687104563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.131635 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.132034 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.63202013 +0000 UTC m=+141.793292389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.259778 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.260413 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.760401056 +0000 UTC m=+141.921673315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.318175 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 16:02:45 +0000 UTC, rotation deadline is 2026-12-20 07:56:20.839110014 +0000 UTC Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.318210 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7863h48m34.520902201s for next certificate rotation Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.361708 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.362152 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.862131779 +0000 UTC m=+142.023404048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.385913 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"] Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.425786 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" podStartSLOduration=120.425751356 podStartE2EDuration="2m0.425751356s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.385795589 +0000 UTC m=+141.547067858" watchObservedRunningTime="2026-01-26 16:07:46.425751356 +0000 UTC m=+141.587023625" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.463527 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.463818 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:46.963804861 +0000 UTC m=+142.125077130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.565648 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.565805 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.065779482 +0000 UTC m=+142.227051751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.566178 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.566465 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.06645305 +0000 UTC m=+142.227725319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.670879 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.671213 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.171197726 +0000 UTC m=+142.332469995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.671248 4680 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ndf74 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.671271 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.671660 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" podStartSLOduration=121.671641669 podStartE2EDuration="2m1.671641669s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.627496246 +0000 UTC m=+141.788768515" watchObservedRunningTime="2026-01-26 16:07:46.671641669 +0000 UTC m=+141.832913938" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.672102 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fgknk" podStartSLOduration=121.672096921 podStartE2EDuration="2m1.672096921s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.670383344 +0000 UTC m=+141.831655603" watchObservedRunningTime="2026-01-26 16:07:46.672096921 +0000 UTC m=+141.833369190" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.694016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" event={"ID":"453f2d30-8c6f-4626-9078-b34554f72d7b","Type":"ContainerStarted","Data":"c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.695655 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.696995 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.697051 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" 
podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.708508 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kzqd" event={"ID":"c5b7fdeb-1a25-4193-bf77-6645f6e8370a","Type":"ContainerStarted","Data":"d9b73696c3b7a9170fcb60e40f035b6612a34b553656cc1e94dc4e3ee09fb830"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.728192 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" event={"ID":"210a0892-bb13-4734-8783-13d4cb76fb4d","Type":"ContainerStarted","Data":"1ef2d0f8070228ab6e3b6a94ef9a88f43db73c5f348ec99138c28f9ec17d2e8c"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.730138 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" event={"ID":"52101413-4b6f-4b34-bbfc-27d16b75b2a1","Type":"ContainerStarted","Data":"9eba4e56574049e14de0447d15e277642f247a17113920ea943e0f83c54f37f6"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.749452 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6slhj" event={"ID":"e75ebcde-12f1-4748-b7e7-f416820ae61b","Type":"ContainerStarted","Data":"a2ca6cde160976c49e781d5cfa12efb8eff62e7d68f12a33b185281d0beec8b2"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.759383 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" event={"ID":"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1","Type":"ContainerStarted","Data":"dd0a71c21672d9259f1b7bfc77d7353ff130916e346d3a35dcdd635bee538c49"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.773984 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" event={"ID":"9e3eb885-ee39-406a-a004-27cab65b02f8","Type":"ContainerStarted","Data":"3d4fab1fc70ce8a4a83cacd652ad10400f96e6a97daf9a1a3cdb07be53faacfe"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.774618 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.775970 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.275952563 +0000 UTC m=+142.437224902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.784649 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" event={"ID":"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8","Type":"ContainerStarted","Data":"96f63fc18be01bc3ddfec14bc4bb6da2d28564e6a64fdef5d7a279b913552033"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.791287 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" event={"ID":"ede4cd45-4ef2-4302-8b01-db0e21c400a4","Type":"ContainerStarted","Data":"e231d66b90feec6b83d2d34a72b09827ee05d1d2d201d552d859a3778f26bd72"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.791393 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nw2gk" podStartSLOduration=120.791299044 podStartE2EDuration="2m0.791299044s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.782388299 +0000 UTC m=+141.943660558" watchObservedRunningTime="2026-01-26 16:07:46.791299044 +0000 UTC m=+141.952571313" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.805228 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" event={"ID":"f5bdf39f-15a1-4f08-b536-73387288994b","Type":"ContainerStarted","Data":"8fb7868403a0fb6f7e84ea713f09464d9752f95525734fc3171063d9e253dcb6"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.805270 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" event={"ID":"f5bdf39f-15a1-4f08-b536-73387288994b","Type":"ContainerStarted","Data":"b0e71fdfd3dd07137aa54cabcd6b28016afb64d084145281918b23415e024822"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.819324 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" event={"ID":"41f82b70-3d6b-4c74-9655-08e4552ee8b4","Type":"ContainerStarted","Data":"9e091b82148c9bde3b27be451a32e8c089d4941d4b2f5db244304f26285b74e5"} Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.820338 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.820378 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 26 
16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.875614 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.885474 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.385449159 +0000 UTC m=+142.546721428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.956896 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9kzqd" podStartSLOduration=120.956879931 podStartE2EDuration="2m0.956879931s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.954355101 +0000 UTC m=+142.115627360" watchObservedRunningTime="2026-01-26 16:07:46.956879931 +0000 UTC m=+142.118152200" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.975430 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 16:07:46 crc kubenswrapper[4680]: [+]log ok Jan 26 16:07:46 crc kubenswrapper[4680]: [-]poststarthook/max-in-flight-filter failed: reason withheld Jan 26 16:07:46 crc kubenswrapper[4680]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Jan 26 16:07:46 crc kubenswrapper[4680]: healthz check failed Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.989732 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.983282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:46 crc kubenswrapper[4680]: E0126 16:07:46.983578 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 16:07:47.483560383 +0000 UTC m=+142.644832652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.984150 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:07:46 crc kubenswrapper[4680]: I0126 16:07:46.990250 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.068650 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp2cm" podStartSLOduration=121.068634939 podStartE2EDuration="2m1.068634939s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:46.997587588 +0000 UTC m=+142.158859857" watchObservedRunningTime="2026-01-26 16:07:47.068634939 +0000 UTC m=+142.229907208" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.091674 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.092052 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.592037252 +0000 UTC m=+142.753309521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.119834 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.121254 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6slhj" podStartSLOduration=6.121242874 podStartE2EDuration="6.121242874s" podCreationTimestamp="2026-01-26 16:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:47.119747623 +0000 UTC m=+142.281019892" watchObservedRunningTime="2026-01-26 16:07:47.121242874 +0000 UTC m=+142.282515143" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.122778 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podStartSLOduration=121.122773716 podStartE2EDuration="2m1.122773716s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:47.08323744 +0000 UTC m=+142.244509709" watchObservedRunningTime="2026-01-26 16:07:47.122773716 +0000 UTC m=+142.284045985" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.137118 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:07:47 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:07:47 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:07:47 crc kubenswrapper[4680]: healthz check failed Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.137199 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.194807 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.195123 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.695110953 +0000 UTC m=+142.856383222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.264286 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.302442 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.302870 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.802855691 +0000 UTC m=+142.964127960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: W0126 16:07:47.307259 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfe0c02d_2802_4506_8adf_dcf3ff96e398.slice/crio-45ce51c000864afa6b82c76d6a9937a20e89880b2b855ac71dae554ba519e4fe WatchSource:0}: Error finding container 45ce51c000864afa6b82c76d6a9937a20e89880b2b855ac71dae554ba519e4fe: Status 404 returned error can't find the container with id 45ce51c000864afa6b82c76d6a9937a20e89880b2b855ac71dae554ba519e4fe Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.404826 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.405319 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:47.905306755 +0000 UTC m=+143.066579024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.412327 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kz692"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.416877 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:07:47 crc kubenswrapper[4680]: W0126 16:07:47.460475 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded520de6_ce85_4283_b919_ecb2b6158668.slice/crio-b99ff23ac2da34c825c090a8ab2129a784e8b736177e35e64164f0d4a7bf6073 WatchSource:0}: Error finding container b99ff23ac2da34c825c090a8ab2129a784e8b736177e35e64164f0d4a7bf6073: Status 404 returned error can't find the container with id b99ff23ac2da34c825c090a8ab2129a784e8b736177e35e64164f0d4a7bf6073 Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.517887 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.518229 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.018212155 +0000 UTC m=+143.179484424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.619410 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.619750 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.119739043 +0000 UTC m=+143.281011312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.668618 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hhs6d"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.671531 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.721037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.721698 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.221680823 +0000 UTC m=+143.382953092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.797564 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.825224 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.825505 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.325492843 +0000 UTC m=+143.486765112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.877538 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.879341 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2"] Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.895517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hhs6d" event={"ID":"2dead528-ae26-4834-8971-b119b113c807","Type":"ContainerStarted","Data":"fe2aa7dcc5d8c902011d3ca164d222bc135e24b79eccfe02c2ea1b079552a1c2"} Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.913244 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" event={"ID":"e118dee6-50de-4dbc-bf0c-ddda27bd5da5","Type":"ContainerStarted","Data":"7cebabd5185a491085eceb7e21a69a44da4fc85f54f436dfc6c8bc6ae1a74aa5"} Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.926766 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:47 crc kubenswrapper[4680]: E0126 16:07:47.927100 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.427086933 +0000 UTC m=+143.588359202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.938181 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" event={"ID":"1d1c4ec3-28a9-4741-844b-ea48d59f84d3","Type":"ContainerStarted","Data":"f2cbd710a23d3ec779531e98cf3c110d656ebc026bab4d1459a791c341adc40b"} Jan 26 16:07:47 crc kubenswrapper[4680]: W0126 16:07:47.945052 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9c6c4c_e626_48f7_ae69_c50da92766d5.slice/crio-5703ef90103a5e92602c774180e595adc93295fab34a938225d987dbb8f2a826 WatchSource:0}: Error finding container 5703ef90103a5e92602c774180e595adc93295fab34a938225d987dbb8f2a826: Status 404 returned error can't find the container with id 5703ef90103a5e92602c774180e595adc93295fab34a938225d987dbb8f2a826 Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.945560 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" event={"ID":"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8","Type":"ContainerStarted","Data":"cfcab6eaf3fd208299ef94f6ab797fc61084ac543c0ee9384a6624d06315dc69"} Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.945590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" event={"ID":"8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8","Type":"ContainerStarted","Data":"7092b0367a922ccbba60f59f262dda70cbef9621d661497b2d4e736688cfacc5"} Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.946224 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:07:47 crc kubenswrapper[4680]: I0126 16:07:47.985248 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" event={"ID":"debda3fd-3a5a-4f25-b732-90eb3bade1d4","Type":"ContainerStarted","Data":"4d126b0155dc0d13a09642a06f1b8f81d529cf0b39f9b7f63c4a5a796d9e8721"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.027872 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.028908 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.528896379 +0000 UTC m=+143.690168638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.054331 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" podStartSLOduration=122.054293437 podStartE2EDuration="2m2.054293437s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:47.944604114 +0000 UTC m=+143.105876383" watchObservedRunningTime="2026-01-26 16:07:48.054293437 +0000 UTC m=+143.215565706" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.056024 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6s8qz"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.056755 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podStartSLOduration=122.056743794 podStartE2EDuration="2m2.056743794s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.052834986 +0000 UTC m=+143.214107255" watchObservedRunningTime="2026-01-26 16:07:48.056743794 +0000 UTC m=+143.218016063" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.076837 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" event={"ID":"ede4cd45-4ef2-4302-8b01-db0e21c400a4","Type":"ContainerStarted","Data":"61649a750658bf72acc05cd218fb149641a9885180b883b953bf3b5c165afd8c"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.100568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" event={"ID":"52101413-4b6f-4b34-bbfc-27d16b75b2a1","Type":"ContainerStarted","Data":"94100a6452451499c0321432e02101e0a48076e99dd9b1498a45c4952ba7e9a0"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.122173 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:07:48 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:07:48 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:07:48 crc kubenswrapper[4680]: healthz check failed Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.122219 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.134094 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.134472 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.634457178 +0000 UTC m=+143.795729447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.176869 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.232758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" event={"ID":"f5bdf39f-15a1-4f08-b536-73387288994b","Type":"ContainerStarted","Data":"73d80a9de192f8f101c75af3e9c4b109671048999879e1bd8703ba0fe648fb0b"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.240002 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.265399 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.765381363 +0000 UTC m=+143.926653632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.268570 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qsz44" podStartSLOduration=122.26855522 podStartE2EDuration="2m2.26855522s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.210114026 +0000 UTC m=+143.371386295" watchObservedRunningTime="2026-01-26 16:07:48.26855522 +0000 UTC m=+143.429827479" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.276412 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" event={"ID":"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1","Type":"ContainerStarted","Data":"1a89dde78c90645addde8ab435699d67b96da71aa7b50a3acdb01013dcaa1be6"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.277564 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.283390 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.283460 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.314522 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" event={"ID":"9e3eb885-ee39-406a-a004-27cab65b02f8","Type":"ContainerStarted","Data":"689d07192d7d80c55e42b0083760fee7ccf3f19bd472d36e352c34503579f822"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.338863 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" event={"ID":"210a0892-bb13-4734-8783-13d4cb76fb4d","Type":"ContainerStarted","Data":"f859e4b86c66a69d91853ca1fee8c8e6b76cdd18a5f2c0ff6e137a7c9e23a009"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.340515 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 
16:07:48.341090 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.841050081 +0000 UTC m=+144.002322350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.382789 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" podStartSLOduration=123.382769567 podStartE2EDuration="2m3.382769567s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.369813331 +0000 UTC m=+143.531085600" watchObservedRunningTime="2026-01-26 16:07:48.382769567 +0000 UTC m=+143.544041836" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.390631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" event={"ID":"cfe0c02d-2802-4506-8adf-dcf3ff96e398","Type":"ContainerStarted","Data":"89b0d1313d2c6b248b711629f830ff382b8bffea3b3d474be7c11bb53ad93d6c"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.390696 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" event={"ID":"cfe0c02d-2802-4506-8adf-dcf3ff96e398","Type":"ContainerStarted","Data":"45ce51c000864afa6b82c76d6a9937a20e89880b2b855ac71dae554ba519e4fe"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.391965 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.422807 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" event={"ID":"5e264ebe-f53f-4365-90ca-4e7bbe7bd885","Type":"ContainerStarted","Data":"ab7834c375588669acae9ae6d965c2507d66d14db19b5e51fcd50d8e7d44c254"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.435594 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" event={"ID":"ed520de6-ce85-4283-b919-ecb2b6158668","Type":"ContainerStarted","Data":"b99ff23ac2da34c825c090a8ab2129a784e8b736177e35e64164f0d4a7bf6073"} Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.441611 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.442880 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:48.942868297 +0000 UTC m=+144.104140566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.445482 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.445532 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.526216 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrdt7" podStartSLOduration=122.526200475 podStartE2EDuration="2m2.526200475s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.482435904 +0000 UTC m=+143.643708173" watchObservedRunningTime="2026-01-26 16:07:48.526200475 +0000 UTC m=+143.687472744" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.527363 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.542341 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.543144 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.0431309 +0000 UTC m=+144.204403159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.587941 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.619197 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podStartSLOduration=122.619180819 podStartE2EDuration="2m2.619180819s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.619052585 +0000 UTC m=+143.780324854" watchObservedRunningTime="2026-01-26 16:07:48.619180819 +0000 UTC m=+143.780453088" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.620144 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9496p" podStartSLOduration=123.620136755 podStartE2EDuration="2m3.620136755s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.556283502 +0000 UTC m=+143.717555771" watchObservedRunningTime="2026-01-26 16:07:48.620136755 +0000 UTC m=+143.781409034" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.644731 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.645105 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.14509454 +0000 UTC m=+144.306366809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.736311 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h7dcz" podStartSLOduration=122.736294865 podStartE2EDuration="2m2.736294865s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.735425331 +0000 UTC m=+143.896697600" watchObservedRunningTime="2026-01-26 16:07:48.736294865 +0000 UTC m=+143.897567134" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.740390 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-46pbk"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.747471 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.747819 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.247805911 +0000 UTC m=+144.409078180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.761234 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lprxn"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.778794 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" podStartSLOduration=122.778775001 podStartE2EDuration="2m2.778775001s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.776950761 +0000 UTC m=+143.938223030" watchObservedRunningTime="2026-01-26 16:07:48.778775001 +0000 UTC m=+143.940047270" Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.820394 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9dh6"] Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.849668 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.850052 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.350040658 +0000 UTC m=+144.511312927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.898478 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" podStartSLOduration=122.898459228 podStartE2EDuration="2m2.898459228s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:48.897294326 +0000 UTC m=+144.058566595" watchObservedRunningTime="2026-01-26 16:07:48.898459228 +0000 UTC m=+144.059731497"
Jan 26 16:07:48 crc kubenswrapper[4680]: I0126 16:07:48.950512 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:48 crc kubenswrapper[4680]: E0126 16:07:48.950904 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.450890198 +0000 UTC m=+144.612162467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.052880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.053187 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.553176337 +0000 UTC m=+144.714448606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.129731 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:49 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:49 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:49 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.130026 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.153943 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.154300 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.654285993 +0000 UTC m=+144.815558262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.255014 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.255569 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.755550864 +0000 UTC m=+144.916823133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.358460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.359048 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.859034506 +0000 UTC m=+145.020306775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.460311 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.460917 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:49.960904363 +0000 UTC m=+145.122176632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
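
The two failures above alternate every 500 ms for the same PVC: the new image-registry pod (bdfe3694-...) cannot MountDevice it, and the pod being torn down (8f668bae-...) cannot TearDown it, both because no registered CSI driver named kubevirt.io.hostpath-provisioner exists yet; its node plugin pod, csi-hostpathplugin-w9dh6, is only starting its containers in the events below. The UniqueName field packs driver name and volume handle together as "<driver>^<handle>", and kubelet resolves the driver half against the plugins that have registered with it. A minimal, self-contained Go sketch of that lookup; the names are illustrative stand-ins, not kubelet's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Hypothetical stand-in for kubelet's set of CSI plugins that have
    // registered over the plugin-registration socket. Empty until the
    // node-driver-registrar sidecar comes up, which is the window this
    // log captures.
    var registeredDrivers = map[string]bool{}

    // splitUniqueName mirrors the "<driver>^<volumeHandle>" convention
    // visible in the log's UniqueName fields.
    func splitUniqueName(unique string) (driver, handle string, err error) {
    	parts := strings.SplitN(unique, "^", 2)
    	if len(parts) != 2 {
    		return "", "", fmt.Errorf("malformed unique name %q", unique)
    	}
    	return parts[0], parts[1], nil
    }

    func newCSIDriverClient(driver string) error {
    	if !registeredDrivers[driver] {
    		// The condition the log reports as "driver name ... not found
    		// in the list of registered CSI drivers".
    		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
    	}
    	return nil
    }

    func main() {
    	driver, handle, _ := splitUniqueName("kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
    	fmt.Println(handle, newCSIDriverClient(driver))
    }

The retries are harmless so long as registration eventually completes; both the mount and the unmount leg will succeed on a later pass once the driver appears in the registry.
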
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.494154 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6s8qz" event={"ID":"088d53b5-33b9-434a-83b5-2347e2dc38cc","Type":"ContainerStarted","Data":"84d1e91d4b203f3bec58233998b60ea2a0d7965eeec783379341df5008df2d61"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.494197 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6s8qz" event={"ID":"088d53b5-33b9-434a-83b5-2347e2dc38cc","Type":"ContainerStarted","Data":"e110d831bb30a1ba8f30f0d775fe8cb8894b0d812354e89db92dddabce55bd86"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.496024 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" event={"ID":"2915ee10-fa21-477c-8bae-13bae7d3ab0b","Type":"ContainerStarted","Data":"a721b8586b1ad318b17d5ba81b0115143e61c5242ba33afe6849e7b0c7670953"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.496054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" event={"ID":"2915ee10-fa21-477c-8bae-13bae7d3ab0b","Type":"ContainerStarted","Data":"7636ac3b0cd486b96d1a3daa2e7955c46f1a14f7368c3f4524b1b0da299887e6"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.529260 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerStarted","Data":"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.529300 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerStarted","Data":"4c728fcdfe808e1a57007e39ad2b542007a74f571ed7ff8fbcd0fd1fe83c3fbd"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.529470 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.530442 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vvkm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.530595 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.537705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" event={"ID":"5ebd5c1d-3782-425e-8bdd-893779283f56","Type":"ContainerStarted","Data":"5188631a24744ae9cfa21dc1e87a368658454575951b8918beb9865e95c04305"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.537744 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" event={"ID":"5ebd5c1d-3782-425e-8bdd-893779283f56","Type":"ContainerStarted","Data":"12e415f0fe4b6fdfc00b35c04ccba20c72b545797d14ed3563a15381e3048bef"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.542172 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6s8qz" podStartSLOduration=8.542159204 podStartE2EDuration="8.542159204s" podCreationTimestamp="2026-01-26 16:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.540635403 +0000 UTC m=+144.701907672" watchObservedRunningTime="2026-01-26 16:07:49.542159204 +0000 UTC m=+144.703431463"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.551251 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" event={"ID":"ff9c6c4c-e626-48f7-ae69-c50da92766d5","Type":"ContainerStarted","Data":"2ff3fa02426bc494932314b7aa059bef7b975680355a69faf056d47e47fcab31"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.551292 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" event={"ID":"ff9c6c4c-e626-48f7-ae69-c50da92766d5","Type":"ContainerStarted","Data":"1085c44042f61bab52481bccd89313cde92304b64773c54ce9386a9670918edc"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.551302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" event={"ID":"ff9c6c4c-e626-48f7-ae69-c50da92766d5","Type":"ContainerStarted","Data":"5703ef90103a5e92602c774180e595adc93295fab34a938225d987dbb8f2a826"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.562892 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.564144 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.064128848 +0000 UTC m=+145.225401117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
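
Each failed operation is parked by nestedpendingoperations.go with "No retries permitted until <now + 500ms>", so the reconciler's roughly 100 ms passes (visible in the timestamp spacing above) skip the volume until the gate expires; that is where the steady 500 ms cadence comes from. A toy version of such a retry gate, illustrative only; kubelet's real backoff bookkeeping is richer:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pendingOp models one parked operation: after a failure it refuses
    // further attempts until retryAfter, matching the log's
    // "(durationBeforeRetry 500ms)".
    type pendingOp struct {
    	retryAfter time.Time
    }

    func (op *pendingOp) run(now time.Time, attempt func() error) error {
    	if now.Before(op.retryAfter) {
    		return fmt.Errorf("no retries permitted until %s", op.retryAfter.Format(time.RFC3339Nano))
    	}
    	if err := attempt(); err != nil {
    		op.retryAfter = now.Add(500 * time.Millisecond) // durationBeforeRetry
    		return err
    	}
    	return nil
    }

    func main() {
    	mount := func() error {
    		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    	}
    	op := &pendingOp{}
    	fmt.Println(op.run(time.Now(), mount)) // fails, arms the 500 ms gate
    	fmt.Println(op.run(time.Now(), mount)) // still gated: "no retries permitted until ..."
    	time.Sleep(600 * time.Millisecond)
    	fmt.Println(op.run(time.Now(), mount)) // gate expired: fails again and re-arms
    }
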
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.574290 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wkb4v" event={"ID":"cfe0c02d-2802-4506-8adf-dcf3ff96e398","Type":"ContainerStarted","Data":"b019bedca84b70885b774ff7880748c1f8a875b40aa80461a50c660be147de1b"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.582483 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cz2m2" podStartSLOduration=123.582468401 podStartE2EDuration="2m3.582468401s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.581467434 +0000 UTC m=+144.742739703" watchObservedRunningTime="2026-01-26 16:07:49.582468401 +0000 UTC m=+144.743740660"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.591474 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hhs6d" event={"ID":"2dead528-ae26-4834-8971-b119b113c807","Type":"ContainerStarted","Data":"a58aab375f4f532426c54592c55d44af2eb4f0718f922b4ee640ec6a4eea46ad"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.625099 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" event={"ID":"ed520de6-ce85-4283-b919-ecb2b6158668","Type":"ContainerStarted","Data":"1dc9883528af704bd0e4f031e1c4c03016fdcac757b90030c327ec3be845f27e"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.627679 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5sfq5" podStartSLOduration=123.627666962 podStartE2EDuration="2m3.627666962s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.627034165 +0000 UTC m=+144.788306434" watchObservedRunningTime="2026-01-26 16:07:49.627666962 +0000 UTC m=+144.788939231"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.629475 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" event={"ID":"debda3fd-3a5a-4f25-b732-90eb3bade1d4","Type":"ContainerStarted","Data":"4258be40cb23ae0a0d4ee26071e32d9b36870e5825fcc8666b2b4021cf98e416"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.630995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" event={"ID":"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd","Type":"ContainerStarted","Data":"38eb170b2b80e15b27316c0efd24108332dab2e2f4a65b480cfd55fd7742806c"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.631020 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" event={"ID":"59e9f7e1-0b14-4284-bc1d-ff302dc46dcd","Type":"ContainerStarted","Data":"0302bfe3cc5f1c306a4a0f6b2a79e95c913b69aaba84bf57de0c2f2d7e8ed17a"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.632119 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"91562f276d79818c0b67058783827207bc292c6017d8a2b8c317d3d479d23d54"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.633460 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" event={"ID":"4d377938-8d0c-42fe-b1ad-5d105de349be","Type":"ContainerStarted","Data":"cc45056c7b1abf28e730e34280a7d5fd9307f322e40dbac677fd4a45e78b7e57"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.664604 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.665810 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.16579816 +0000 UTC m=+145.327070429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.669605 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podStartSLOduration=123.669590074 podStartE2EDuration="2m3.669590074s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.663957969 +0000 UTC m=+144.825230238" watchObservedRunningTime="2026-01-26 16:07:49.669590074 +0000 UTC m=+144.830862343"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.675386 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w8tz5" event={"ID":"5e264ebe-f53f-4365-90ca-4e7bbe7bd885","Type":"ContainerStarted","Data":"6cb1fc00dcf749047fc64e049789b6c94b191bf3b3a15be5d24e782c0c0800e0"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.706652 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" event={"ID":"fc716f95-70ff-431c-8234-d4f7bf46a08f","Type":"ContainerStarted","Data":"543121561c291f4ff546d0f4bdc95f03f491eb27baf7d4f4dc1e2d476139d475"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.726653 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" podStartSLOduration=123.72663617 podStartE2EDuration="2m3.72663617s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.725455628 +0000 UTC m=+144.886727897" watchObservedRunningTime="2026-01-26 16:07:49.72663617 +0000 UTC m=+144.887908439"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.727714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" event={"ID":"1d1c4ec3-28a9-4741-844b-ea48d59f84d3","Type":"ContainerStarted","Data":"d2450d7e7653000ecc62443cb5bbd7b899fd2f2ad47379916b6cc8493fc8894c"}
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.727751 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.746973 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.791574 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.791840 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.29179172 +0000 UTC m=+145.453063999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.792192 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.794957 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.294941856 +0000 UTC m=+145.456214125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.832915 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" podStartSLOduration=123.832898538 podStartE2EDuration="2m3.832898538s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.831542361 +0000 UTC m=+144.992814630" watchObservedRunningTime="2026-01-26 16:07:49.832898538 +0000 UTC m=+144.994170807"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.833249 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" podStartSLOduration=124.833244718 podStartE2EDuration="2m4.833244718s" podCreationTimestamp="2026-01-26 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.791498941 +0000 UTC m=+144.952771210" watchObservedRunningTime="2026-01-26 16:07:49.833244718 +0000 UTC m=+144.994516987"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.865466 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7n2zf" podStartSLOduration=123.865450902 podStartE2EDuration="2m3.865450902s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.86427524 +0000 UTC m=+145.025547509" watchObservedRunningTime="2026-01-26 16:07:49.865450902 +0000 UTC m=+145.026723171"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.894189 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.894330 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.394307475 +0000 UTC m=+145.555579754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.894786 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.895541 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.395529658 +0000 UTC m=+145.556802017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.918910 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lprxn" podStartSLOduration=123.91889437 podStartE2EDuration="2m3.91889437s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.917318137 +0000 UTC m=+145.078590406" watchObservedRunningTime="2026-01-26 16:07:49.91889437 +0000 UTC m=+145.080166639"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.942613 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" podStartSLOduration=123.942594481 podStartE2EDuration="2m3.942594481s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.940349549 +0000 UTC m=+145.101621818" watchObservedRunningTime="2026-01-26 16:07:49.942594481 +0000 UTC m=+145.103866750"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.974116 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podStartSLOduration=123.974100216 podStartE2EDuration="2m3.974100216s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:49.971987548 +0000 UTC m=+145.133259807" watchObservedRunningTime="2026-01-26 16:07:49.974100216 +0000 UTC m=+145.135372485"
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.996210 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.996400 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.496375228 +0000 UTC m=+145.657647497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:49 crc kubenswrapper[4680]: I0126 16:07:49.996540 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:49 crc kubenswrapper[4680]: E0126 16:07:49.996803 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.496789519 +0000 UTC m=+145.658061788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.097954 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.098139 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.598115581 +0000 UTC m=+145.759387840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.098360 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.098670 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.598658986 +0000 UTC m=+145.759931255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.121760 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:50 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:50 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:50 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.121812 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.199426 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.199717 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.699703311 +0000 UTC m=+145.860975580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
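
The probe body quoted in the router failures is the aggregated healthz format used by Kubernetes-style health endpoints: one "[+]check ok" or "[-]check failed: reason withheld" line per sub-check, a trailing "healthz check failed", and an overall 500 status, which the prober then reports verbatim. A compact Go handler that produces the same shape; the check names are copied from the log, but the logic is illustrative, not the OpenShift router's code:

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    )

    type check struct {
    	name string
    	fn   func() error
    }

    // healthz runs every sub-check and renders the [+]/[-] body seen in
    // the log; any failure turns the whole endpoint into a 500.
    func healthz(checks []check) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		body, failed := "", false
    		for _, c := range checks {
    			if err := c.fn(); err != nil {
    				failed = true
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", c.name)
    			}
    		}
    		if failed {
    			w.WriteHeader(http.StatusInternalServerError)
    			fmt.Fprint(w, body+"healthz check failed\n")
    			return
    		}
    		fmt.Fprint(w, body+"ok\n")
    	}
    }

    func main() {
    	notReady := func() error { return fmt.Errorf("not synced") }
    	ready := func() error { return nil }
    	http.Handle("/healthz", healthz([]check{
    		{"backend-http", notReady},
    		{"has-synced", notReady},
    		{"process-running", ready},
    	}))
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }
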
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.301125 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.301502 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.801486876 +0000 UTC m=+145.962759145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.402126 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.402322 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.902302834 +0000 UTC m=+146.063575093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.402481 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.402762 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:50.902754036 +0000 UTC m=+146.064026305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.503713 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.503933 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.003901404 +0000 UTC m=+146.165173673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
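
The alternating reconciler_common.go lines are kubelet's desired-state versus actual-state pass: the PVC is still in the actual world attached to the deleted pod 8f668bae-..., so it gets an UnmountVolume, and it is desired by the replacement registry pod bdfe3694-..., so it also gets a MountVolume; both legs then die on the same unregistered-driver error. A skeletal Go rendering of that pass, keyed by (volume, pod) the way the log pairs them; illustrative only, kubelet's volume manager is far richer:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNoDriver = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

    type key struct{ volume, pod string }

    // reconcile issues unmounts for mounts that are no longer desired and
    // mounts for desires not yet realized, mirroring the alternating
    // UnmountVolume/MountVolume lines above.
    func reconcile(desired, actual map[key]bool) {
    	for k := range actual {
    		if !desired[k] {
    			fmt.Printf("UnmountVolume started for volume %q pod %q: %v\n", k.volume, k.pod, errNoDriver)
    		}
    	}
    	for k := range desired {
    		if !actual[k] {
    			fmt.Printf("MountVolume started for volume %q pod %q: %v\n", k.volume, k.pod, errNoDriver)
    		}
    	}
    }

    func main() {
    	pvc := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
    	actual := map[key]bool{{pvc, "8f668bae-612b-4b75-9490-919e737c6a3b"}: true}  // still mounted for the deleted pod
    	desired := map[key]bool{{pvc, "bdfe3694-fc1a-4262-85ea-413fad222b35"}: true} // wanted by the replacement pod
    	reconcile(desired, actual)
    }
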
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.504307 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.504667 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.004655404 +0000 UTC m=+146.165927733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.538092 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hlx8"]
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.539157 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.541870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.554015 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hlx8"]
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.605669 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.605783 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.105757031 +0000 UTC m=+146.267029300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.605966 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.606323 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.106314336 +0000 UTC m=+146.267586605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.707167 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.707274 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.207248918 +0000 UTC m=+146.368521187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.707467 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.707511 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.707587 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.707719 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7gh\" (UniqueName: \"kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.707753 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.207746251 +0000 UTC m=+146.369018520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.727325 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.727388 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.733109 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kz692" event={"ID":"ed520de6-ce85-4283-b919-ecb2b6158668","Type":"ContainerStarted","Data":"ade2b76d329476b697c7b8920a6627cce434dd5d19759bf68a081e84f48ab414"}
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.734462 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k27bz" event={"ID":"fc716f95-70ff-431c-8234-d4f7bf46a08f","Type":"ContainerStarted","Data":"2a364660da5bccb875fcd11725ffcef58cda611310eeac5ba923efd211a03a21"}
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.735711 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsnkg"]
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.736761 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-46pbk" event={"ID":"4d377938-8d0c-42fe-b1ad-5d105de349be","Type":"ContainerStarted","Data":"7deb0c5a1a3cb7689bf6b2e1a0a73aba74eff832b9f5e535be2e60bf4f52cd75"}
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.736856 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.738128 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hhs6d" event={"ID":"2dead528-ae26-4834-8971-b119b113c807","Type":"ContainerStarted","Data":"47d389018222f554aeec5fa121ab7a838652fddcce03252552d5254c5adee33f"}
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.738607 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vvkm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.738651 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.741749 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.775439 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsnkg"]
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.807465 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hhs6d" podStartSLOduration=10.807446989 podStartE2EDuration="10.807446989s" podCreationTimestamp="2026-01-26 16:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:50.803491361 +0000 UTC m=+145.964763640" watchObservedRunningTime="2026-01-26 16:07:50.807446989 +0000 UTC m=+145.968719258"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808334 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808603 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.808642 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.308622132 +0000 UTC m=+146.469894451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
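
Two readiness-probe failure modes sit side by side in this stretch: a refused TCP connection (marketplace-operator, nothing listening on 10.217.0.24:8080 yet) and a client-side timeout (packageserver, listening but not answering within the deadline). A bare-bones Go version of what an HTTP prober does, using a 1-second client timeout (the default probe timeoutSeconds); a sketch, not kubelet's prober code:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHTTP issues a GET with a short timeout and maps the outcome to
    // the prober's success/failure vocabulary. Transport errors surface
    // verbatim, which is where the "connection refused" and
    // "Client.Timeout exceeded" strings in the log come from.
    func probeHTTP(url string) (string, error) {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return "failure", err // e.g. "dial tcp ...: connect: connection refused"
    	}
    	defer resp.Body.Close()
    	io.Copy(io.Discard, resp.Body)
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return "failure", fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return "success", nil
    }

    func main() {
    	result, err := probeHTTP("http://10.217.0.24:8080/healthz")
    	fmt.Println(result, err)
    }

Both failures are transient here: the containers had only just started, and the kubelet keeps probing on its configured period until the endpoints answer.
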
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7gh\" (UniqueName: \"kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808866 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.808998 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.809191 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.309179207 +0000 UTC m=+146.470451476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.809444 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.842001 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7gh\" (UniqueName: \"kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh\") pod \"community-operators-7hlx8\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.850853 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hlx8"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.909794 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.910312 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.910497 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljp5\" (UniqueName: \"kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.910645 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:50 crc kubenswrapper[4680]: E0126 16:07:50.911799 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.411771924 +0000 UTC m=+146.573044183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.936345 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7q4mf"]
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.937254 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q4mf"
Jan 26 16:07:50 crc kubenswrapper[4680]: I0126 16:07:50.967954 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q4mf"]
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.012542 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.012593 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljp5\" (UniqueName: \"kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.012628 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.012655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.013148 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.013208 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.513192569 +0000 UTC m=+146.674464838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.013390 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.046298 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljp5\" (UniqueName: \"kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5\") pod \"certified-operators-fsnkg\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.050485 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsnkg"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.113604 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.113955 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg9b\" (UniqueName: \"kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.114031 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf"
Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.114166 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf"
Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.114304 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.614274085 +0000 UTC m=+146.775546354 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.133206 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:07:51 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:07:51 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:07:51 crc kubenswrapper[4680]: healthz check failed Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.133258 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.140759 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.141990 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.160517 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.219798 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.219839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg9b\" (UniqueName: \"kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.219874 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.219913 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.220417 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.220620 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.221080 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.721058407 +0000 UTC m=+146.882330676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.253113 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg9b\" (UniqueName: \"kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b\") pod \"community-operators-7q4mf\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.253382 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.320774 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.321026 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmzs\" (UniqueName: \"kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.324598 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.82457817 +0000 UTC m=+146.985850439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.321058 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.324711 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.324776 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.325119 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.825112145 +0000 UTC m=+146.986384414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.342914 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hlx8"] Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.430600 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.430930 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmzs\" (UniqueName: \"kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.430960 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.431012 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.431482 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.431771 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:51.931749933 +0000 UTC m=+147.093022202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.432518 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.462247 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmzs\" (UniqueName: \"kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs\") pod \"certified-operators-tzrq7\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.489196 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.519963 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.541533 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.541867 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.041855677 +0000 UTC m=+147.203127946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.633922 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.642181 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.642519 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.1424957 +0000 UTC m=+147.303767969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.744121 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.745806 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.245790187 +0000 UTC m=+147.407062456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.749955 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerStarted","Data":"bc41e70dd3828a44cf65b84d1031f21030d909d654112e693d871043f2cf6b0a"} Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.755890 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"5e48ce683c69f87c7bc43e37ec414f3de70e9c48a6544094a1bb4c120ba5b0c4"} Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.757266 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hhs6d" Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.788262 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsnkg"] Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.845095 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.845270 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.345243558 +0000 UTC m=+147.506515827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.845657 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.848904 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.348889568 +0000 UTC m=+147.510161837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:51 crc kubenswrapper[4680]: I0126 16:07:51.946515 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:51 crc kubenswrapper[4680]: E0126 16:07:51.946852 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.446838168 +0000 UTC m=+147.608110437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.047843 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.048253 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.548237842 +0000 UTC m=+147.709510111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.133769 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:07:52 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:07:52 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:07:52 crc kubenswrapper[4680]: healthz check failed Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.133822 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.137885 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.142535 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.157593 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.158062 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.658037508 +0000 UTC m=+147.819309777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.158178 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.159332 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.659324513 +0000 UTC m=+147.820596782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.168290 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.217317 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.217353 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.252289 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.253337 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.258175 4680 patch_prober.go:28] interesting pod/console-f9d7485db-z2kjp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.258379 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z2kjp" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.258810 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.259217 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.759203486 +0000 UTC m=+147.920475755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.260076 4680 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dqgwn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]log ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]etcd ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/max-in-flight-filter ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 16:07:52 crc kubenswrapper[4680]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 16:07:52 crc kubenswrapper[4680]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-startinformers ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 16:07:52 crc kubenswrapper[4680]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 16:07:52 crc kubenswrapper[4680]: livez check failed Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.260104 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn" podUID="debda3fd-3a5a-4f25-b732-90eb3bade1d4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.316690 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.360504 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: 
\"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.361794 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.861783153 +0000 UTC m=+148.023055422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.442732 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q4mf"] Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.461260 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.461563 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:52.961546542 +0000 UTC m=+148.122818811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.563217 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.566499 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.066485874 +0000 UTC m=+148.227758143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.664824 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.664957 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.164936787 +0000 UTC m=+148.326209056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.665334 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.665559 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.666687 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.166675215 +0000 UTC m=+148.327947484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.671780 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.762110 4680 generic.go:334] "Generic (PLEG): container finished" podID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerID="48539145cd50102d8b769a1fc4543e819953175efcbcce273e720346b063090a" exitCode=0 Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.763480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerDied","Data":"48539145cd50102d8b769a1fc4543e819953175efcbcce273e720346b063090a"} Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.763514 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerStarted","Data":"178bae32ea9e4d4b51509a26a55ee50921b72ffa8347dcbc0ce7b8e066bb7555"} Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.763873 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.766575 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.766732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.766860 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.766962 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.767990 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.267967797 +0000 UTC m=+148.429240066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.768597 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.768813 4680 generic.go:334] "Generic (PLEG): container finished" podID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerID="543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce" exitCode=0 Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.769287 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerDied","Data":"543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce"} Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.770829 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.771633 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.771991 4680 generic.go:334] "Generic (PLEG): container finished" podID="2267683a-dbc9-4689-8529-15afc7b2df37" containerID="435546af1eefe45fd83910c4c446f4ed9250a8d3a8d7787ee61b5837625e2253" exitCode=0 Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.772132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerDied","Data":"435546af1eefe45fd83910c4c446f4ed9250a8d3a8d7787ee61b5837625e2253"} Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.772222 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" 
event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerStarted","Data":"d2a2a9f9ba8ff4832c1f1b8a6b2f1072ee79ec138da570c51ad8647d6adeb07b"}
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.775475 4680 generic.go:334] "Generic (PLEG): container finished" podID="52101413-4b6f-4b34-bbfc-27d16b75b2a1" containerID="94100a6452451499c0321432e02101e0a48076e99dd9b1498a45c4952ba7e9a0" exitCode=0
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.775613 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" event={"ID":"52101413-4b6f-4b34-bbfc-27d16b75b2a1","Type":"ContainerDied","Data":"94100a6452451499c0321432e02101e0a48076e99dd9b1498a45c4952ba7e9a0"}
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.777709 4680 generic.go:334] "Generic (PLEG): container finished" podID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerID="2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185" exitCode=0
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.777876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerDied","Data":"2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185"}
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.778000 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerStarted","Data":"24e403ffbd69a0f2f9b331b806380daf42afa879f6cdc01ceb28c3b31703e26c"}
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.783003 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.787165 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7q7t5"
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.871859 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.873723 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.373710481 +0000 UTC m=+148.534982740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.977182 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"]
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.978395 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.981504 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:52 crc kubenswrapper[4680]: E0126 16:07:52.981862 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.48184626 +0000 UTC m=+148.643118529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.987110 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.988208 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 16:07:52 crc kubenswrapper[4680]: I0126 16:07:52.994618 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.009291 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.082716 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.082789 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.082820 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkz4\" (UniqueName: \"kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.082892 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.083502 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.583491611 +0000 UTC m=+148.744763880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.135992 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:53 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:53 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:53 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.136370 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.184150 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.184333 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.184379 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.184399 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pkz4\" (UniqueName: \"kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.184799 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.684785653 +0000 UTC m=+148.846057922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.185477 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.185686 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.213452 4680 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.243930 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pkz4\" (UniqueName: \"kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4\") pod \"redhat-marketplace-76zsc\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.286515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.286849 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.786837555 +0000 UTC m=+148.948109824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.326463 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zsc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.332601 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.343832 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.344730 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.344755 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.345239 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.345281 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.360260 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.390657 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.390959 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.890944094 +0000 UTC m=+149.052216353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.495775 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.495832 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.495867 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xx8\" (UniqueName: \"kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.495892 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.496280 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:53.996264946 +0000 UTC m=+149.157537215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.563856 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.565727 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.569911 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.570216 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.581380 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.600783 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.600930 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.600983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xx8\" (UniqueName: \"kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.601002 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.601026 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.601053 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.601172 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:07:54.101148207 +0000 UTC m=+149.262420466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.601512 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.601949 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.629827 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xx8\" (UniqueName: \"kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8\") pod \"redhat-marketplace-ppntp\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.675604 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppntp"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.681536 4680 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T16:07:53.213471321Z","Handler":null,"Name":""}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.701812 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.701865 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.701949 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.702366 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: E0126 16:07:53.702836 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 16:07:54.202298854 +0000 UTC m=+149.363571123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pzz4v" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.737938 4680 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.737997 4680 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.738621 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.741827 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.748915 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.766500 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.772820 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.802652 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.824637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.838182 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0546317a97849d52844b4560389db055f9ee1000e950ad85341530985882928d"}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.838226 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c85d34a8deb8b0ddd3f2f8edbe80334235f5ffe391727c534828016866590ffe"}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.844039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0249d2719deb7c02a5135d71b59234cea6681925064d1cd52fda5b8f15c470b6"}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.866909 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"07f03d2e14dfad0dbc240fa172231ceae33fb5f99c93ff41a40734df5fed1653"}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.866946 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"2752ab68e37e6bc7b298f87de1cc4daffdcb840af8f9079d2825d7b0eb9d5303"}
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.892439 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.904258 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.904355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbbv\" (UniqueName: \"kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.904458 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.904477 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.917382 4680 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.917433 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.941580 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"]
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.943987 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:53 crc kubenswrapper[4680]: I0126 16:07:53.955872 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"]
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.005979 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbbv\" (UniqueName: \"kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.006046 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.006084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.008204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.009013 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.021944 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"]
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.045908 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbbv\" (UniqueName: \"kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv\") pod \"redhat-operators-s74p8\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.074516 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pzz4v\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.088416 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s74p8"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.106835 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.113419 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.113525 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bmvf\" (UniqueName: \"kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.121456 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9kzqd"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.128445 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:54 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:54 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:54 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.128495 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.214883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.214915 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bmvf\" (UniqueName: \"kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.214985 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.215356 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.215510 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.234980 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.261152 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bmvf\" (UniqueName: \"kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf\") pod \"redhat-operators-4njsq\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.332790 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4njsq"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.395852 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4"
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.465294 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.481781 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"
Jan 26 16:07:54 crc kubenswrapper[4680]: W0126 16:07:54.556867 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod162bda9b_b1b5_476b_b7e7_bdfe55922030.slice/crio-1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7 WatchSource:0}: Error finding container 1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7: Status 404 returned error can't find the container with id 1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.634419 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65lqx\" (UniqueName: \"kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx\") pod \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") "
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.634499 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume\") pod \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") "
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.634594 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52101413-4b6f-4b34-bbfc-27d16b75b2a1-secret-volume\") pod \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\" (UID: \"52101413-4b6f-4b34-bbfc-27d16b75b2a1\") "
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.639085 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "52101413-4b6f-4b34-bbfc-27d16b75b2a1" (UID: "52101413-4b6f-4b34-bbfc-27d16b75b2a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.641571 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx" (OuterVolumeSpecName: "kube-api-access-65lqx") pod "52101413-4b6f-4b34-bbfc-27d16b75b2a1" (UID: "52101413-4b6f-4b34-bbfc-27d16b75b2a1"). InnerVolumeSpecName "kube-api-access-65lqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.643499 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101413-4b6f-4b34-bbfc-27d16b75b2a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52101413-4b6f-4b34-bbfc-27d16b75b2a1" (UID: "52101413-4b6f-4b34-bbfc-27d16b75b2a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.645765 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"]
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.739384 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52101413-4b6f-4b34-bbfc-27d16b75b2a1-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.740438 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65lqx\" (UniqueName: \"kubernetes.io/projected/52101413-4b6f-4b34-bbfc-27d16b75b2a1-kube-api-access-65lqx\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.740493 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52101413-4b6f-4b34-bbfc-27d16b75b2a1-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.764212 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"]
Jan 26 16:07:54 crc kubenswrapper[4680]: W0126 16:07:54.774168 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef753eab_7bb6_4da0_a1ff_e6f8ed635cd1.slice/crio-9bac44c193cc1b02f6afca80387810cc17d411d5020ebd352df7d100fc7140cf WatchSource:0}: Error finding container 9bac44c193cc1b02f6afca80387810cc17d411d5020ebd352df7d100fc7140cf: Status 404 returned error can't find the container with id 9bac44c193cc1b02f6afca80387810cc17d411d5020ebd352df7d100fc7140cf
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.910879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"162bda9b-b1b5-476b-b7e7-bdfe55922030","Type":"ContainerStarted","Data":"1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7"}
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.912019 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerStarted","Data":"9bac44c193cc1b02f6afca80387810cc17d411d5020ebd352df7d100fc7140cf"}
Jan 26 16:07:54 crc kubenswrapper[4680]: I0126 16:07:54.935279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4a1e5f89ed4899fe3f20809b7f07f21a9a7082657af84935b148235a17e50ec9"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.005480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"64e42ddfabd3b4d258aeb6a2cff61be02c5f44430e638e86c5c9130aa91ad795"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.040448 4680 generic.go:334] "Generic (PLEG): container finished" podID="a25355b2-4808-4605-a4a7-b51d677ad232" containerID="0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707" exitCode=0
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.040558 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerDied","Data":"0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.040583 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerStarted","Data":"6487b9845a58b322e761f9059a7f36265773e1c45483194d09db96c4371f091d"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.053390 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podStartSLOduration=14.053373075 podStartE2EDuration="14.053373075s" podCreationTimestamp="2026-01-26 16:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:55.049415106 +0000 UTC m=+150.210687365" watchObservedRunningTime="2026-01-26 16:07:55.053373075 +0000 UTC m=+150.214645334"
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.063481 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2fab998d76f7eca773ccae72f9a6c8d05db36eb95094a0b551787241dcde9ba0"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.063928 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d7075bdb3054c92da46e8664e1c3e326f670d5e286cb153bd49bde7f4fac4762"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.064581 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.075511 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft" event={"ID":"52101413-4b6f-4b34-bbfc-27d16b75b2a1","Type":"ContainerDied","Data":"9eba4e56574049e14de0447d15e277642f247a17113920ea943e0f83c54f37f6"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.075548 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eba4e56574049e14de0447d15e277642f247a17113920ea943e0f83c54f37f6"
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.075615 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.075888 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"]
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.093168 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerStarted","Data":"a4b46eb7693c2f559f48680f33d6efe6f965e0b2dc39e1287ab8be5805bd7c93"}
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.132235 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:55 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:55 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:55 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.132281 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:55 crc kubenswrapper[4680]: W0126 16:07:55.159387 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdfe3694_fc1a_4262_85ea_413fad222b35.slice/crio-0bc26e327f728c2eccf566e65664de60920a8cdfad680f153d33bac17ad70966 WatchSource:0}: Error finding container 0bc26e327f728c2eccf566e65664de60920a8cdfad680f153d33bac17ad70966: Status 404 returned error can't find the container with id 0bc26e327f728c2eccf566e65664de60920a8cdfad680f153d33bac17ad70966
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.288501 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 26 16:07:55 crc kubenswrapper[4680]: I0126 16:07:55.289266 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"]
Jan 26 16:07:55 crc kubenswrapper[4680]: W0126 16:07:55.314499 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1468cde4_a721_4295_8f4c_e81d2d68a843.slice/crio-40a527abd5937bb098870a331009715fb66a680fed6aea60729292ecee8d3db4 WatchSource:0}: Error finding container 40a527abd5937bb098870a331009715fb66a680fed6aea60729292ecee8d3db4: Status 404 returned error can't find the container with id 40a527abd5937bb098870a331009715fb66a680fed6aea60729292ecee8d3db4
Jan 26 16:07:56 crc kubenswrapper[4680]: I0126 16:07:56.115880 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" event={"ID":"bdfe3694-fc1a-4262-85ea-413fad222b35","Type":"ContainerStarted","Data":"0bc26e327f728c2eccf566e65664de60920a8cdfad680f153d33bac17ad70966"}
Jan 26 16:07:56 crc kubenswrapper[4680]: I0126 16:07:56.121743 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:56 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:56 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:56 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:56 crc kubenswrapper[4680]: I0126 16:07:56.121783 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:56 crc kubenswrapper[4680]: I0126 16:07:56.123472 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerStarted","Data":"40a527abd5937bb098870a331009715fb66a680fed6aea60729292ecee8d3db4"}
Jan 26 16:07:56 crc kubenswrapper[4680]: E0126 16:07:56.637301 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1468cde4_a721_4295_8f4c_e81d2d68a843.slice/crio-conmon-6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.122062 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:57 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:57 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:57 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.122323 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.154903 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"162bda9b-b1b5-476b-b7e7-bdfe55922030","Type":"ContainerStarted","Data":"47d456c867c4e60d5dffe518c4158b2f71134924bfb7d682e0c6755abf66b0f2"}
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.173887 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerID="fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75" exitCode=0
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.175351 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.175336076 podStartE2EDuration="4.175336076s" podCreationTimestamp="2026-01-26 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:57.170628726 +0000 UTC m=+152.331900995" watchObservedRunningTime="2026-01-26 16:07:57.175336076 +0000 UTC m=+152.336608345"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.182890 4680 generic.go:334] "Generic (PLEG): container finished" podID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerID="6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2" exitCode=0
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.232651 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.232691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerDied","Data":"fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75"}
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.232720 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerDied","Data":"6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2"}
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.244673 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dqgwn"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.365378 4680 generic.go:334] "Generic (PLEG): container finished" podID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerID="283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f" exitCode=0
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.365472 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerDied","Data":"283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f"}
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.370139 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" event={"ID":"bdfe3694-fc1a-4262-85ea-413fad222b35","Type":"ContainerStarted","Data":"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9"}
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.370684 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v"
Jan 26 16:07:57 crc kubenswrapper[4680]: I0126 16:07:57.447156 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" podStartSLOduration=131.44714168 podStartE2EDuration="2m11.44714168s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:07:57.444158098 +0000 UTC m=+152.605430367" watchObservedRunningTime="2026-01-26 16:07:57.44714168 +0000 UTC m=+152.608413949"
Jan 26 16:07:58 crc kubenswrapper[4680]: I0126 16:07:58.124596 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:58 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:58 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:58 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:58 crc kubenswrapper[4680]: I0126 16:07:58.124666 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:58 crc kubenswrapper[4680]: I0126 16:07:58.406302 4680 generic.go:334] "Generic (PLEG): container finished" podID="162bda9b-b1b5-476b-b7e7-bdfe55922030" containerID="47d456c867c4e60d5dffe518c4158b2f71134924bfb7d682e0c6755abf66b0f2" exitCode=0
Jan 26 16:07:58 crc kubenswrapper[4680]: I0126 16:07:58.406352 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"162bda9b-b1b5-476b-b7e7-bdfe55922030","Type":"ContainerDied","Data":"47d456c867c4e60d5dffe518c4158b2f71134924bfb7d682e0c6755abf66b0f2"}
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.123340 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 16:07:59 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld
Jan 26 16:07:59 crc kubenswrapper[4680]: [+]process-running ok
Jan 26 16:07:59 crc kubenswrapper[4680]: healthz check failed
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.123392 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.422686 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hhs6d"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.624247 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 16:07:59 crc kubenswrapper[4680]: E0126 16:07:59.624783 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52101413-4b6f-4b34-bbfc-27d16b75b2a1" containerName="collect-profiles"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.624795 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="52101413-4b6f-4b34-bbfc-27d16b75b2a1" containerName="collect-profiles"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.624891 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="52101413-4b6f-4b34-bbfc-27d16b75b2a1" containerName="collect-profiles"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.625303 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.648290 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.653187 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.653407 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.674958 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.675462 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.776437 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.776530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.776593 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.798801 4680 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.802227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.877666 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access\") pod \"162bda9b-b1b5-476b-b7e7-bdfe55922030\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.877785 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir\") pod \"162bda9b-b1b5-476b-b7e7-bdfe55922030\" (UID: \"162bda9b-b1b5-476b-b7e7-bdfe55922030\") " Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.879000 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "162bda9b-b1b5-476b-b7e7-bdfe55922030" (UID: "162bda9b-b1b5-476b-b7e7-bdfe55922030"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.909232 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "162bda9b-b1b5-476b-b7e7-bdfe55922030" (UID: "162bda9b-b1b5-476b-b7e7-bdfe55922030"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.961595 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.979497 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/162bda9b-b1b5-476b-b7e7-bdfe55922030-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:59 crc kubenswrapper[4680]: I0126 16:07:59.979532 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162bda9b-b1b5-476b-b7e7-bdfe55922030-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.124198 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:00 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:00 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:00 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.124570 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.435992 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.470129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"162bda9b-b1b5-476b-b7e7-bdfe55922030","Type":"ContainerDied","Data":"1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7"} Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.470167 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf46ed9942e5500ed686823740342bb2ca7186c8d7f31ce4930909fd1f17bb7" Jan 26 16:08:00 crc kubenswrapper[4680]: I0126 16:08:00.470191 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 16:08:01 crc kubenswrapper[4680]: I0126 16:08:01.121947 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:01 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:01 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:01 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:01 crc kubenswrapper[4680]: I0126 16:08:01.122298 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:01 crc kubenswrapper[4680]: I0126 16:08:01.491869 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eaed1c5-7c6a-4919-88fc-b01d530d73c1","Type":"ContainerStarted","Data":"f3a4420abdb7439da7205995fadde48a30888a333e4bf5e6b802e8716553d407"} Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.122483 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:02 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:02 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:02 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.123029 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.253463 4680 patch_prober.go:28] interesting pod/console-f9d7485db-z2kjp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.253563 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z2kjp" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.544588 4680 generic.go:334] "Generic (PLEG): container finished" podID="9eaed1c5-7c6a-4919-88fc-b01d530d73c1" containerID="168ad310d7a42cdae46317e95f6b3dd89d06b5d3b2db2c4cc49baebf6ad56852" exitCode=0 Jan 26 16:08:02 crc kubenswrapper[4680]: I0126 16:08:02.544684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eaed1c5-7c6a-4919-88fc-b01d530d73c1","Type":"ContainerDied","Data":"168ad310d7a42cdae46317e95f6b3dd89d06b5d3b2db2c4cc49baebf6ad56852"} Jan 26 16:08:03 crc kubenswrapper[4680]: I0126 16:08:03.120917 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:03 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:03 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:03 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:03 crc kubenswrapper[4680]: I0126 16:08:03.120977 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:03 crc kubenswrapper[4680]: I0126 16:08:03.352551 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fgknk" Jan 26 16:08:04 crc kubenswrapper[4680]: I0126 16:08:04.122121 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:04 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:04 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:04 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:04 crc kubenswrapper[4680]: I0126 16:08:04.122567 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:05 crc kubenswrapper[4680]: I0126 16:08:05.120886 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:05 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:05 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:05 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:05 crc kubenswrapper[4680]: I0126 16:08:05.120928 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:06 crc kubenswrapper[4680]: I0126 16:08:06.121712 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 16:08:06 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 16:08:06 crc kubenswrapper[4680]: [+]process-running ok Jan 26 16:08:06 crc kubenswrapper[4680]: healthz check failed Jan 26 16:08:06 crc kubenswrapper[4680]: I0126 16:08:06.121778 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:08:07 crc kubenswrapper[4680]: I0126 16:08:07.135839 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:08:07 crc kubenswrapper[4680]: I0126 16:08:07.148724 
4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 16:08:07 crc kubenswrapper[4680]: I0126 16:08:07.902929 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:08:07 crc kubenswrapper[4680]: I0126 16:08:07.923160 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40816c76-44c8-4161-84f3-b1693d48aeaa-metrics-certs\") pod \"network-metrics-daemon-fbl6p\" (UID: \"40816c76-44c8-4161-84f3-b1693d48aeaa\") " pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:08:08 crc kubenswrapper[4680]: I0126 16:08:08.088675 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fbl6p" Jan 26 16:08:12 crc kubenswrapper[4680]: I0126 16:08:12.255999 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:08:12 crc kubenswrapper[4680]: I0126 16:08:12.261385 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:08:14 crc kubenswrapper[4680]: I0126 16:08:14.242121 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:08:16 crc kubenswrapper[4680]: I0126 16:08:16.981052 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:16 crc kubenswrapper[4680]: I0126 16:08:16.981169 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:08:24 crc kubenswrapper[4680]: I0126 16:08:24.104574 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.289167 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.332086 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access\") pod \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.332176 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir\") pod \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\" (UID: \"9eaed1c5-7c6a-4919-88fc-b01d530d73c1\") " Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.332511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9eaed1c5-7c6a-4919-88fc-b01d530d73c1" (UID: "9eaed1c5-7c6a-4919-88fc-b01d530d73c1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4680]: E0126 16:08:30.338710 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 16:08:30 crc kubenswrapper[4680]: E0126 16:08:30.338889 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4njsq_openshift-marketplace(1468cde4-a721-4295-8f4c-e81d2d68a843): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:30 crc kubenswrapper[4680]: E0126 16:08:30.340631 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4njsq" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.352835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9eaed1c5-7c6a-4919-88fc-b01d530d73c1" (UID: "9eaed1c5-7c6a-4919-88fc-b01d530d73c1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.434248 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.434290 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eaed1c5-7c6a-4919-88fc-b01d530d73c1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.748881 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eaed1c5-7c6a-4919-88fc-b01d530d73c1","Type":"ContainerDied","Data":"f3a4420abdb7439da7205995fadde48a30888a333e4bf5e6b802e8716553d407"} Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.749184 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a4420abdb7439da7205995fadde48a30888a333e4bf5e6b802e8716553d407" Jan 26 16:08:30 crc kubenswrapper[4680]: I0126 16:08:30.748893 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.456408 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4njsq" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.548775 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.548980 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ljp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fsnkg_openshift-marketplace(692a260c-34fe-45b3-8ee0-1f438a630beb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.550825 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fsnkg" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.807974 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.808977 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaed1c5-7c6a-4919-88fc-b01d530d73c1" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.809051 4680 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9eaed1c5-7c6a-4919-88fc-b01d530d73c1" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: E0126 16:08:32.809166 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162bda9b-b1b5-476b-b7e7-bdfe55922030" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.809225 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="162bda9b-b1b5-476b-b7e7-bdfe55922030" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.809334 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaed1c5-7c6a-4919-88fc-b01d530d73c1" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.809344 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="162bda9b-b1b5-476b-b7e7-bdfe55922030" containerName="pruner" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.810038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.813583 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.813806 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.820407 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.863960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.864122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.965122 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.965174 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.965244 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
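
The run of entries above is kubelet's standard pod-admission sequence for revision-pruner-9-crc in openshift-kube-apiserver: a SyncLoop ADD from the API, RemoveStaleState cleanup of CPU- and memory-manager state left by the earlier pruner pods, informer caches populated for the namespace's kube-root-ca.crt ConfigMap and installer-sa dockercfg Secret, then VerifyControllerAttachedVolume and MountVolume.SetUp for the kubelet-dir and kube-api-access volumes (the kube-api-access mount completes in the next entry). Below is a minimal sketch of how such a timeline can be reconstructed from a journal dump; it assumes the journal is exported one entry per line (for example via journalctl -u kubelet) and uses a hypothetical file name, with milestone strings copied from the entries visible in this excerpt.

#!/usr/bin/env python3
# Illustrative sketch only: rebuild one pod's lifecycle timeline from a
# kubelet journal dump with one entry per line (e.g. journalctl -u kubelet).
# The log file name is hypothetical; the patterns are taken from this excerpt.
import re
import sys

# journald prefix, e.g. "Jan 26 16:08:32 crc kubenswrapper[4680]: <message>"
PREFIX = re.compile(
    r'^(?P<ts>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) \S+ kubenswrapper\[\d+\]: (?P<msg>.*)$')

# Milestones in the order kubelet emits them in this excerpt.
MILESTONES = (
    'SyncLoop ADD',
    'RemoveStaleState',
    'VerifyControllerAttachedVolume started',
    'MountVolume.SetUp succeeded',
    'ContainerStarted',
    'ContainerDied',
    'UnmountVolume.TearDown succeeded',
    'Volume detached',
)

def timeline(path, *needles):
    # Yield (timestamp, milestone, message) for entries mentioning any needle.
    # Pass the pod name, and optionally its UID: the RemoveStaleState and
    # state_mem entries identify the pod by UID and container name only.
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = PREFIX.match(line)
            if not m or not any(n in m['msg'] for n in needles):
                continue
            for milestone in MILESTONES:
                if milestone in m['msg']:
                    yield m['ts'], milestone, m['msg'][:100]
                    break

if __name__ == '__main__':
    # e.g. python3 pod_timeline.py kubelet.log revision-pruner-9-crc \
    #          744faecd-2eb7-4756-8be3-b532feb69b24
    for ts, what, msg in timeline(sys.argv[1], *sys.argv[2:]):
        print(f'{ts}  {what:40}  {msg}')

Matching on plain substrings rather than parsing klog's structured fields keeps the sketch short and tolerant of the mixed quoting styles seen across these entries.
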
Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.984867 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:32 crc kubenswrapper[4680]: I0126 16:08:32.991711 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:08:33 crc kubenswrapper[4680]: I0126 16:08:33.143794 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:34 crc kubenswrapper[4680]: E0126 16:08:34.181229 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fsnkg" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" Jan 26 16:08:34 crc kubenswrapper[4680]: E0126 16:08:34.233572 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 16:08:34 crc kubenswrapper[4680]: E0126 16:08:34.233717 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7q4mf_openshift-marketplace(2267683a-dbc9-4689-8529-15afc7b2df37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:34 crc kubenswrapper[4680]: E0126 16:08:34.235274 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7q4mf" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.306495 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7q4mf" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.393273 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.393870 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62xx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ppntp_openshift-marketplace(ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.395019 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ppntp" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.410838 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.410989 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pkz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-76zsc_openshift-marketplace(a25355b2-4808-4605-a4a7-b51d677ad232): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.412225 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-76zsc" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.418437 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.418567 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgmzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tzrq7_openshift-marketplace(13ea8ffe-97fe-4168-81e7-4816da782f9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.420330 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tzrq7" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.439581 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.439700 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj7gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7hlx8_openshift-marketplace(4518f8bc-7ce9-40ee-8b35-263609e549aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.441002 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7hlx8" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" Jan 26 16:08:36 crc kubenswrapper[4680]: I0126 16:08:36.745929 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 16:08:36 crc kubenswrapper[4680]: W0126 16:08:36.753278 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod744faecd_2eb7_4756_8be3_b532feb69b24.slice/crio-2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193 WatchSource:0}: Error finding container 2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193: Status 404 returned error can't find the container with id 2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193 Jan 26 16:08:36 crc kubenswrapper[4680]: I0126 16:08:36.778579 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerStarted","Data":"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718"} Jan 26 16:08:36 crc kubenswrapper[4680]: I0126 16:08:36.780780 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"744faecd-2eb7-4756-8be3-b532feb69b24","Type":"ContainerStarted","Data":"2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193"} Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.783590 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-76zsc" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.783742 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppntp" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.783776 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7hlx8" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" Jan 26 16:08:36 crc kubenswrapper[4680]: E0126 16:08:36.788408 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tzrq7" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" Jan 26 16:08:36 crc kubenswrapper[4680]: I0126 16:08:36.874240 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fbl6p"] Jan 26 16:08:36 crc kubenswrapper[4680]: W0126 16:08:36.877720 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40816c76_44c8_4161_84f3_b1693d48aeaa.slice/crio-122b9b22bd446cf5aff6afc24483998956aa3bef74d3b40a144dbcb6ab8c4e81 WatchSource:0}: Error finding container 122b9b22bd446cf5aff6afc24483998956aa3bef74d3b40a144dbcb6ab8c4e81: Status 404 returned error can't find the container with id 122b9b22bd446cf5aff6afc24483998956aa3bef74d3b40a144dbcb6ab8c4e81 Jan 26 16:08:37 crc kubenswrapper[4680]: I0126 16:08:37.787810 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" event={"ID":"40816c76-44c8-4161-84f3-b1693d48aeaa","Type":"ContainerStarted","Data":"122b9b22bd446cf5aff6afc24483998956aa3bef74d3b40a144dbcb6ab8c4e81"} Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.408443 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.409623 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.413874 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.463372 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.463433 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.463483 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.564398 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.564463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.564490 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.564594 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.564630 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock\") pod \"installer-9-crc\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.586006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.725633 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.796919 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" event={"ID":"40816c76-44c8-4161-84f3-b1693d48aeaa","Type":"ContainerStarted","Data":"b9fdf98887eb1ae203a540dbcc3c7567f94fa5ad8800c99fe22cb566f7999f81"} Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.796966 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fbl6p" event={"ID":"40816c76-44c8-4161-84f3-b1693d48aeaa","Type":"ContainerStarted","Data":"a8a2ce9b807e7dff0bb3f0bd7ba1d7e19bb30498cfb4de8e47669b45826cb846"} Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.821477 4680 generic.go:334] "Generic (PLEG): container finished" podID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerID="3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718" exitCode=0 Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.821568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerDied","Data":"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718"} Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.824041 4680 generic.go:334] "Generic (PLEG): container finished" podID="744faecd-2eb7-4756-8be3-b532feb69b24" containerID="f7053ade87d1ae7344a46ab5e6785d9bac93f86a53f3bc3ab9f848d21465398d" exitCode=0 Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.824302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"744faecd-2eb7-4756-8be3-b532feb69b24","Type":"ContainerDied","Data":"f7053ade87d1ae7344a46ab5e6785d9bac93f86a53f3bc3ab9f848d21465398d"} Jan 26 16:08:38 crc kubenswrapper[4680]: I0126 16:08:38.833135 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fbl6p" podStartSLOduration=172.833117604 podStartE2EDuration="2m52.833117604s" podCreationTimestamp="2026-01-26 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:08:38.827388367 +0000 UTC m=+193.988660636" watchObservedRunningTime="2026-01-26 16:08:38.833117604 +0000 UTC m=+193.994389863" Jan 26 16:08:39 crc kubenswrapper[4680]: I0126 16:08:39.185055 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 16:08:39 crc kubenswrapper[4680]: W0126 16:08:39.197347 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod523759f0_efd9_4341_8bc5_7482bd6fd6b2.slice/crio-eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6 WatchSource:0}: Error finding container eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6: Status 404 returned error can't find the container with id eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6 Jan 26 16:08:39 crc kubenswrapper[4680]: I0126 16:08:39.830433 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"523759f0-efd9-4341-8bc5-7482bd6fd6b2","Type":"ContainerStarted","Data":"eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6"} Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.072915 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.128993 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access\") pod \"744faecd-2eb7-4756-8be3-b532feb69b24\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.129168 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir\") pod \"744faecd-2eb7-4756-8be3-b532feb69b24\" (UID: \"744faecd-2eb7-4756-8be3-b532feb69b24\") " Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.129285 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "744faecd-2eb7-4756-8be3-b532feb69b24" (UID: "744faecd-2eb7-4756-8be3-b532feb69b24"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.129568 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/744faecd-2eb7-4756-8be3-b532feb69b24-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.134095 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "744faecd-2eb7-4756-8be3-b532feb69b24" (UID: "744faecd-2eb7-4756-8be3-b532feb69b24"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.231308 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/744faecd-2eb7-4756-8be3-b532feb69b24-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.836165 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"744faecd-2eb7-4756-8be3-b532feb69b24","Type":"ContainerDied","Data":"2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193"} Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.836207 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2450be129bb49ab5f6f572e7b8e6b3e4c64c79aa0f003830b17f85c47817a193" Jan 26 16:08:40 crc kubenswrapper[4680]: I0126 16:08:40.836262 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 16:08:42 crc kubenswrapper[4680]: I0126 16:08:42.848970 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerStarted","Data":"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0"} Jan 26 16:08:42 crc kubenswrapper[4680]: I0126 16:08:42.851741 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"523759f0-efd9-4341-8bc5-7482bd6fd6b2","Type":"ContainerStarted","Data":"0df4dec7b0d9ebf640b789e432368f522fbf35f6ba5229f6fb8d41d7bbed2fae"} Jan 26 16:08:42 crc kubenswrapper[4680]: I0126 16:08:42.868650 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s74p8" podStartSLOduration=4.901971936 podStartE2EDuration="49.868631842s" podCreationTimestamp="2026-01-26 16:07:53 +0000 UTC" firstStartedPulling="2026-01-26 16:07:57.366954078 +0000 UTC m=+152.528226347" lastFinishedPulling="2026-01-26 16:08:42.333613984 +0000 UTC m=+197.494886253" observedRunningTime="2026-01-26 16:08:42.865451025 +0000 UTC m=+198.026723294" watchObservedRunningTime="2026-01-26 16:08:42.868631842 +0000 UTC m=+198.029904111" Jan 26 16:08:42 crc kubenswrapper[4680]: I0126 16:08:42.883682 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.8836652560000005 podStartE2EDuration="4.883665256s" podCreationTimestamp="2026-01-26 16:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:08:42.879560763 +0000 UTC m=+198.040833042" watchObservedRunningTime="2026-01-26 16:08:42.883665256 +0000 UTC m=+198.044937525" Jan 26 16:08:44 crc kubenswrapper[4680]: I0126 16:08:44.088730 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:08:44 crc kubenswrapper[4680]: I0126 16:08:44.089124 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:08:45 crc kubenswrapper[4680]: I0126 16:08:45.160482 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s74p8" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="registry-server" probeResult="failure" output=< Jan 26 16:08:45 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:08:45 crc kubenswrapper[4680]: > Jan 26 16:08:46 crc kubenswrapper[4680]: I0126 16:08:46.981224 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:46 crc kubenswrapper[4680]: I0126 16:08:46.982208 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:08:46 crc kubenswrapper[4680]: I0126 16:08:46.982368 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:08:46 crc kubenswrapper[4680]: I0126 16:08:46.983084 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:08:46 crc kubenswrapper[4680]: I0126 16:08:46.983300 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47" gracePeriod=600 Jan 26 16:08:49 crc kubenswrapper[4680]: I0126 16:08:49.886743 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47" exitCode=0 Jan 26 16:08:49 crc kubenswrapper[4680]: I0126 16:08:49.886836 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47"} Jan 26 16:08:53 crc kubenswrapper[4680]: I0126 16:08:53.917624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.142328 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.184059 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.932636 4680 generic.go:334] "Generic (PLEG): container finished" podID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerID="c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.932766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerDied","Data":"c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.937794 4680 generic.go:334] "Generic (PLEG): container finished" podID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerID="614e9e02b6e1710159f98a7814980e34e6ae4b58046b0ed4f1eec70c50f40977" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.937964 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerDied","Data":"614e9e02b6e1710159f98a7814980e34e6ae4b58046b0ed4f1eec70c50f40977"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.941311 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerID="eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.941400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerDied","Data":"eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.946509 4680 generic.go:334] "Generic (PLEG): container finished" podID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerID="88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.946578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerDied","Data":"88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.951320 4680 generic.go:334] "Generic (PLEG): container finished" podID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerID="c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.951381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerDied","Data":"c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.956498 4680 generic.go:334] "Generic (PLEG): container finished" podID="a25355b2-4808-4605-a4a7-b51d677ad232" containerID="8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.956579 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerDied","Data":"8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696"} Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.972389 4680 generic.go:334] "Generic (PLEG): container finished" podID="2267683a-dbc9-4689-8529-15afc7b2df37" containerID="19870c339c7a0b261cfb9c15b19b4a759f822bc029569572300a984838d6392a" exitCode=0 Jan 26 16:08:54 crc kubenswrapper[4680]: I0126 16:08:54.972441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerDied","Data":"19870c339c7a0b261cfb9c15b19b4a759f822bc029569572300a984838d6392a"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.979883 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerStarted","Data":"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.982089 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerStarted","Data":"961479f6f413a055c77d4f5b854a83898d7ea7feff26cc075872dcd68f1efb1b"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.983968 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerStarted","Data":"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.989852 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerStarted","Data":"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.991771 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerStarted","Data":"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.994169 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerStarted","Data":"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322"} Jan 26 16:08:55 crc kubenswrapper[4680]: I0126 16:08:55.997052 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerStarted","Data":"3d88b6201e7c16d92723075e8340df7c8a7ccb1fdc1f057467d8432a74c4e238"} Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.035114 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76zsc" podStartSLOduration=3.7840389930000002 podStartE2EDuration="1m4.035099119s" podCreationTimestamp="2026-01-26 16:07:52 +0000 UTC" firstStartedPulling="2026-01-26 16:07:55.149179126 +0000 UTC m=+150.310451395" lastFinishedPulling="2026-01-26 16:08:55.400239242 +0000 UTC m=+210.561511521" observedRunningTime="2026-01-26 16:08:56.033549266 +0000 UTC m=+211.194821525" watchObservedRunningTime="2026-01-26 16:08:56.035099119 +0000 UTC m=+211.196371388" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.035576 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsnkg" podStartSLOduration=3.397334771 podStartE2EDuration="1m6.035572252s" podCreationTimestamp="2026-01-26 16:07:50 +0000 UTC" firstStartedPulling="2026-01-26 16:07:52.780189852 +0000 UTC m=+147.941462121" lastFinishedPulling="2026-01-26 16:08:55.418427333 +0000 UTC m=+210.579699602" observedRunningTime="2026-01-26 16:08:56.010957984 +0000 UTC m=+211.172230253" watchObservedRunningTime="2026-01-26 16:08:56.035572252 +0000 UTC m=+211.196844521" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.050645 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ppntp" podStartSLOduration=4.747707242 podStartE2EDuration="1m3.050635177s" podCreationTimestamp="2026-01-26 16:07:53 +0000 UTC" firstStartedPulling="2026-01-26 16:07:57.184784175 +0000 UTC m=+152.346056444" lastFinishedPulling="2026-01-26 16:08:55.48771211 +0000 UTC m=+210.648984379" observedRunningTime="2026-01-26 16:08:56.047784558 +0000 UTC m=+211.209056827" watchObservedRunningTime="2026-01-26 16:08:56.050635177 +0000 UTC m=+211.211907446" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.066539 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-4njsq" podStartSLOduration=4.673270742 podStartE2EDuration="1m3.066520474s" podCreationTimestamp="2026-01-26 16:07:53 +0000 UTC" firstStartedPulling="2026-01-26 16:07:57.191960832 +0000 UTC m=+152.353233101" lastFinishedPulling="2026-01-26 16:08:55.585210564 +0000 UTC m=+210.746482833" observedRunningTime="2026-01-26 16:08:56.06310704 +0000 UTC m=+211.224379309" watchObservedRunningTime="2026-01-26 16:08:56.066520474 +0000 UTC m=+211.227792743" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.081702 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hlx8" podStartSLOduration=3.330163344 podStartE2EDuration="1m6.081682882s" podCreationTimestamp="2026-01-26 16:07:50 +0000 UTC" firstStartedPulling="2026-01-26 16:07:52.773622922 +0000 UTC m=+147.934895191" lastFinishedPulling="2026-01-26 16:08:55.52514245 +0000 UTC m=+210.686414729" observedRunningTime="2026-01-26 16:08:56.080991532 +0000 UTC m=+211.242263801" watchObservedRunningTime="2026-01-26 16:08:56.081682882 +0000 UTC m=+211.242955151" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.124431 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzrq7" podStartSLOduration=2.307759597 podStartE2EDuration="1m5.124411228s" podCreationTimestamp="2026-01-26 16:07:51 +0000 UTC" firstStartedPulling="2026-01-26 16:07:52.763465263 +0000 UTC m=+147.924737532" lastFinishedPulling="2026-01-26 16:08:55.580116894 +0000 UTC m=+210.741389163" observedRunningTime="2026-01-26 16:08:56.121705883 +0000 UTC m=+211.282978152" watchObservedRunningTime="2026-01-26 16:08:56.124411228 +0000 UTC m=+211.285683497" Jan 26 16:08:56 crc kubenswrapper[4680]: I0126 16:08:56.124729 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7q4mf" podStartSLOduration=3.386802561 podStartE2EDuration="1m6.124722586s" podCreationTimestamp="2026-01-26 16:07:50 +0000 UTC" firstStartedPulling="2026-01-26 16:07:52.774257969 +0000 UTC m=+147.935530228" lastFinishedPulling="2026-01-26 16:08:55.512177984 +0000 UTC m=+210.673450253" observedRunningTime="2026-01-26 16:08:56.104342625 +0000 UTC m=+211.265614894" watchObservedRunningTime="2026-01-26 16:08:56.124722586 +0000 UTC m=+211.285994855" Jan 26 16:09:00 crc kubenswrapper[4680]: I0126 16:09:00.852053 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:09:00 crc kubenswrapper[4680]: I0126 16:09:00.852343 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:09:00 crc kubenswrapper[4680]: I0126 16:09:00.900842 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.051371 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.051443 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.068134 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:09:01 crc 
kubenswrapper[4680]: I0126 16:09:01.158142 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.253814 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.253875 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.288320 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.490405 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.490501 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:01 crc kubenswrapper[4680]: I0126 16:09:01.565173 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:02 crc kubenswrapper[4680]: I0126 16:09:02.064434 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:02 crc kubenswrapper[4680]: I0126 16:09:02.066171 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:09:02 crc kubenswrapper[4680]: I0126 16:09:02.071808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:02 crc kubenswrapper[4680]: I0126 16:09:02.800954 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ndf74"] Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.328159 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.328431 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.365049 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.422797 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q4mf"] Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.621545 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.676387 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.676469 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 16:09:03 crc kubenswrapper[4680]: I0126 16:09:03.712080 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 
16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.037494 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzrq7" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="registry-server" containerID="cri-o://961479f6f413a055c77d4f5b854a83898d7ea7feff26cc075872dcd68f1efb1b" gracePeriod=2 Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.037864 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7q4mf" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="registry-server" containerID="cri-o://3d88b6201e7c16d92723075e8340df7c8a7ccb1fdc1f057467d8432a74c4e238" gracePeriod=2 Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.093051 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.096640 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.333991 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.334041 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:04 crc kubenswrapper[4680]: I0126 16:09:04.391224 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.046001 4680 generic.go:334] "Generic (PLEG): container finished" podID="2267683a-dbc9-4689-8529-15afc7b2df37" containerID="3d88b6201e7c16d92723075e8340df7c8a7ccb1fdc1f057467d8432a74c4e238" exitCode=0 Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.046099 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerDied","Data":"3d88b6201e7c16d92723075e8340df7c8a7ccb1fdc1f057467d8432a74c4e238"} Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.049372 4680 generic.go:334] "Generic (PLEG): container finished" podID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerID="961479f6f413a055c77d4f5b854a83898d7ea7feff26cc075872dcd68f1efb1b" exitCode=0 Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.049489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerDied","Data":"961479f6f413a055c77d4f5b854a83898d7ea7feff26cc075872dcd68f1efb1b"} Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.106133 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.597808 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.653334 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691025 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities\") pod \"2267683a-dbc9-4689-8529-15afc7b2df37\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691096 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities\") pod \"13ea8ffe-97fe-4168-81e7-4816da782f9a\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691130 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content\") pod \"2267683a-dbc9-4689-8529-15afc7b2df37\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691187 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmzs\" (UniqueName: \"kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs\") pod \"13ea8ffe-97fe-4168-81e7-4816da782f9a\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities" (OuterVolumeSpecName: "utilities") pod "13ea8ffe-97fe-4168-81e7-4816da782f9a" (UID: "13ea8ffe-97fe-4168-81e7-4816da782f9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.691922 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities" (OuterVolumeSpecName: "utilities") pod "2267683a-dbc9-4689-8529-15afc7b2df37" (UID: "2267683a-dbc9-4689-8529-15afc7b2df37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.692540 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrg9b\" (UniqueName: \"kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b\") pod \"2267683a-dbc9-4689-8529-15afc7b2df37\" (UID: \"2267683a-dbc9-4689-8529-15afc7b2df37\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.692592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content\") pod \"13ea8ffe-97fe-4168-81e7-4816da782f9a\" (UID: \"13ea8ffe-97fe-4168-81e7-4816da782f9a\") " Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.692947 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.692965 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.699981 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b" (OuterVolumeSpecName: "kube-api-access-lrg9b") pod "2267683a-dbc9-4689-8529-15afc7b2df37" (UID: "2267683a-dbc9-4689-8529-15afc7b2df37"). InnerVolumeSpecName "kube-api-access-lrg9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.705278 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs" (OuterVolumeSpecName: "kube-api-access-hgmzs") pod "13ea8ffe-97fe-4168-81e7-4816da782f9a" (UID: "13ea8ffe-97fe-4168-81e7-4816da782f9a"). InnerVolumeSpecName "kube-api-access-hgmzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.742720 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13ea8ffe-97fe-4168-81e7-4816da782f9a" (UID: "13ea8ffe-97fe-4168-81e7-4816da782f9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.745080 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2267683a-dbc9-4689-8529-15afc7b2df37" (UID: "2267683a-dbc9-4689-8529-15afc7b2df37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.793958 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmzs\" (UniqueName: \"kubernetes.io/projected/13ea8ffe-97fe-4168-81e7-4816da782f9a-kube-api-access-hgmzs\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.793988 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrg9b\" (UniqueName: \"kubernetes.io/projected/2267683a-dbc9-4689-8529-15afc7b2df37-kube-api-access-lrg9b\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.793998 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ea8ffe-97fe-4168-81e7-4816da782f9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.794006 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2267683a-dbc9-4689-8529-15afc7b2df37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:05 crc kubenswrapper[4680]: I0126 16:09:05.824117 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"] Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.061161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzrq7" event={"ID":"13ea8ffe-97fe-4168-81e7-4816da782f9a","Type":"ContainerDied","Data":"178bae32ea9e4d4b51509a26a55ee50921b72ffa8347dcbc0ce7b8e066bb7555"} Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.061216 4680 scope.go:117] "RemoveContainer" containerID="961479f6f413a055c77d4f5b854a83898d7ea7feff26cc075872dcd68f1efb1b" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.061316 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzrq7" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.071386 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q4mf" event={"ID":"2267683a-dbc9-4689-8529-15afc7b2df37","Type":"ContainerDied","Data":"d2a2a9f9ba8ff4832c1f1b8a6b2f1072ee79ec138da570c51ad8647d6adeb07b"} Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.071473 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ppntp" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="registry-server" containerID="cri-o://b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84" gracePeriod=2 Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.071591 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7q4mf" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.116151 4680 scope.go:117] "RemoveContainer" containerID="614e9e02b6e1710159f98a7814980e34e6ae4b58046b0ed4f1eec70c50f40977" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.134058 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.143482 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzrq7"] Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.149054 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q4mf"] Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.151418 4680 scope.go:117] "RemoveContainer" containerID="48539145cd50102d8b769a1fc4543e819953175efcbcce273e720346b063090a" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.153725 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7q4mf"] Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.168558 4680 scope.go:117] "RemoveContainer" containerID="3d88b6201e7c16d92723075e8340df7c8a7ccb1fdc1f057467d8432a74c4e238" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.182333 4680 scope.go:117] "RemoveContainer" containerID="19870c339c7a0b261cfb9c15b19b4a759f822bc029569572300a984838d6392a" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.201820 4680 scope.go:117] "RemoveContainer" containerID="435546af1eefe45fd83910c4c446f4ed9250a8d3a8d7787ee61b5837625e2253" Jan 26 16:09:06 crc kubenswrapper[4680]: I0126 16:09:06.965730 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.009500 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities\") pod \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.009536 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xx8\" (UniqueName: \"kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8\") pod \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.009574 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content\") pod \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\" (UID: \"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1\") " Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.010544 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities" (OuterVolumeSpecName: "utilities") pod "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" (UID: "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.020299 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8" (OuterVolumeSpecName: "kube-api-access-62xx8") pod "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" (UID: "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1"). InnerVolumeSpecName "kube-api-access-62xx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.036649 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" (UID: "ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.080385 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerID="b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84" exitCode=0 Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.080743 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerDied","Data":"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84"} Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.080771 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppntp" event={"ID":"ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1","Type":"ContainerDied","Data":"9bac44c193cc1b02f6afca80387810cc17d411d5020ebd352df7d100fc7140cf"} Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.080791 4680 scope.go:117] "RemoveContainer" containerID="b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.080909 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppntp" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.104398 4680 scope.go:117] "RemoveContainer" containerID="eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.110642 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.110674 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62xx8\" (UniqueName: \"kubernetes.io/projected/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-kube-api-access-62xx8\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.110684 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.115773 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"] Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.119415 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppntp"] Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.124164 4680 scope.go:117] "RemoveContainer" containerID="fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.146816 4680 scope.go:117] "RemoveContainer" containerID="b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84" Jan 26 16:09:07 crc kubenswrapper[4680]: E0126 16:09:07.147322 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84\": container with ID starting with b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84 not found: ID does not exist" containerID="b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.147371 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84"} err="failed to get container status \"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84\": rpc error: code = NotFound desc = could not find container \"b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84\": container with ID starting with b61bee4018763ce2bc17c26c4308e72f26b2b3b7b81db312d91d0c78d2078e84 not found: ID does not exist" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.147403 4680 scope.go:117] "RemoveContainer" containerID="eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01" Jan 26 16:09:07 crc kubenswrapper[4680]: E0126 16:09:07.147798 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01\": container with ID starting with eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01 not found: ID does not exist" containerID="eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.147828 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01"} err="failed to get container status \"eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01\": rpc error: code = NotFound desc = could not find container \"eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01\": container with ID starting with eff2ac3807f8e753af6e999c6ff04ef354faf391c02df2b4d2b4e6dd4874cf01 not found: ID does not exist" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.147847 4680 scope.go:117] "RemoveContainer" containerID="fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75" Jan 26 16:09:07 crc kubenswrapper[4680]: E0126 16:09:07.148162 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75\": container with ID starting with fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75 not found: ID does not exist" containerID="fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.148202 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75"} err="failed to get container status \"fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75\": rpc error: code = NotFound desc = could not find container \"fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75\": container with ID starting with fbb43bedf8c8b886c2d33c745e164d76f2e3b841049fe15c6e2245e352463d75 not found: ID does not exist" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.176085 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" path="/var/lib/kubelet/pods/13ea8ffe-97fe-4168-81e7-4816da782f9a/volumes" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.177187 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" path="/var/lib/kubelet/pods/2267683a-dbc9-4689-8529-15afc7b2df37/volumes" Jan 26 16:09:07 crc kubenswrapper[4680]: I0126 16:09:07.177843 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" path="/var/lib/kubelet/pods/ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1/volumes" Jan 26 16:09:08 crc kubenswrapper[4680]: I0126 16:09:08.223415 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"] Jan 26 16:09:08 crc kubenswrapper[4680]: I0126 16:09:08.223607 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4njsq" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="registry-server" containerID="cri-o://2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906" gracePeriod=2 Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.083775 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.092337 4680 generic.go:334] "Generic (PLEG): container finished" podID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerID="2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906" exitCode=0 Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.092388 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerDied","Data":"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906"} Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.092405 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4njsq" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.092420 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4njsq" event={"ID":"1468cde4-a721-4295-8f4c-e81d2d68a843","Type":"ContainerDied","Data":"40a527abd5937bb098870a331009715fb66a680fed6aea60729292ecee8d3db4"} Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.092453 4680 scope.go:117] "RemoveContainer" containerID="2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.105904 4680 scope.go:117] "RemoveContainer" containerID="88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.134570 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities\") pod \"1468cde4-a721-4295-8f4c-e81d2d68a843\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.134618 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content\") pod \"1468cde4-a721-4295-8f4c-e81d2d68a843\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.134663 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bmvf\" (UniqueName: \"kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf\") pod \"1468cde4-a721-4295-8f4c-e81d2d68a843\" (UID: \"1468cde4-a721-4295-8f4c-e81d2d68a843\") " Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.136176 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities" (OuterVolumeSpecName: "utilities") pod "1468cde4-a721-4295-8f4c-e81d2d68a843" (UID: "1468cde4-a721-4295-8f4c-e81d2d68a843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.141158 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf" (OuterVolumeSpecName: "kube-api-access-6bmvf") pod "1468cde4-a721-4295-8f4c-e81d2d68a843" (UID: "1468cde4-a721-4295-8f4c-e81d2d68a843"). InnerVolumeSpecName "kube-api-access-6bmvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.150571 4680 scope.go:117] "RemoveContainer" containerID="6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.167722 4680 scope.go:117] "RemoveContainer" containerID="2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906" Jan 26 16:09:09 crc kubenswrapper[4680]: E0126 16:09:09.169340 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906\": container with ID starting with 2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906 not found: ID does not exist" containerID="2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.169383 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906"} err="failed to get container status \"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906\": rpc error: code = NotFound desc = could not find container \"2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906\": container with ID starting with 2bc08b236e8de9480cd0a63ab6c5d46f44d9af9101b0d8d49c2ecc88ab480906 not found: ID does not exist" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.169409 4680 scope.go:117] "RemoveContainer" containerID="88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6" Jan 26 16:09:09 crc kubenswrapper[4680]: E0126 16:09:09.174621 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6\": container with ID starting with 88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6 not found: ID does not exist" containerID="88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.174669 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6"} err="failed to get container status \"88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6\": rpc error: code = NotFound desc = could not find container \"88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6\": container with ID starting with 88130e18e9ba9f099dc2654ef0997f51b18598d72d8568076f2f611cf02322d6 not found: ID does not exist" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.174694 4680 scope.go:117] "RemoveContainer" containerID="6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2" Jan 26 16:09:09 crc kubenswrapper[4680]: E0126 16:09:09.177574 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2\": container with ID starting with 6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2 not found: ID does not exist" containerID="6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.177604 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2"} err="failed to get container status \"6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2\": rpc error: code = NotFound desc = could not find container \"6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2\": container with ID starting with 6a7929aec2f38348d0d9d199b9eff677adef496f1c53c3a177e6932f0bd361d2 not found: ID does not exist" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.235858 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.235896 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bmvf\" (UniqueName: \"kubernetes.io/projected/1468cde4-a721-4295-8f4c-e81d2d68a843-kube-api-access-6bmvf\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.315658 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1468cde4-a721-4295-8f4c-e81d2d68a843" (UID: "1468cde4-a721-4295-8f4c-e81d2d68a843"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.337355 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1468cde4-a721-4295-8f4c-e81d2d68a843-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.416008 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"] Jan 26 16:09:09 crc kubenswrapper[4680]: I0126 16:09:09.423425 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4njsq"] Jan 26 16:09:11 crc kubenswrapper[4680]: I0126 16:09:11.177348 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" path="/var/lib/kubelet/pods/1468cde4-a721-4295-8f4c-e81d2d68a843/volumes" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.445852 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446616 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446630 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446642 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446650 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446662 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446670 
4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446690 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446699 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446708 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446714 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446727 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446733 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446743 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446750 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446761 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446768 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446776 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446782 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="extract-content" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446793 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446801 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446812 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446819 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="extract-utilities" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446828 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744faecd-2eb7-4756-8be3-b532feb69b24" containerName="pruner" Jan 26 16:09:19 crc 
kubenswrapper[4680]: I0126 16:09:19.446837 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="744faecd-2eb7-4756-8be3-b532feb69b24" containerName="pruner" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.446846 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.446855 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447003 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="744faecd-2eb7-4756-8be3-b532feb69b24" containerName="pruner" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447014 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1468cde4-a721-4295-8f4c-e81d2d68a843" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447022 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2267683a-dbc9-4689-8529-15afc7b2df37" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447031 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef753eab-7bb6-4da0-a1ff-e6f8ed635cd1" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447042 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ea8ffe-97fe-4168-81e7-4816da782f9a" containerName="registry-server" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447509 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.447737 4680 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.448160 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a" gracePeriod=15 Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.448263 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f" gracePeriod=15 Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.448257 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da" gracePeriod=15 Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.448381 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424" gracePeriod=15 Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.448613 4680 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230" gracePeriod=15 Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453171 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453643 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453672 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453697 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453712 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453735 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453750 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453774 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453788 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453825 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453845 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.453864 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.453879 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.454157 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.454191 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.454217 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 
16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.454244 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.454267 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.515057 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.565704 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.565941 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.565977 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.566010 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.566049 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.566094 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.566120 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.566147 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.666987 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667027 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667052 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667096 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667125 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667144 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667162 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667183 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667199 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667244 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667285 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667290 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667305 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.667326 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: I0126 16:09:19.815862 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:19 crc kubenswrapper[4680]: W0126 16:09:19.833476 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ba2634a61702f2739c878e53806461637cadda7f087610fb77b3964a186e2925 WatchSource:0}: Error finding container ba2634a61702f2739c878e53806461637cadda7f087610fb77b3964a186e2925: Status 404 returned error can't find the container with id ba2634a61702f2739c878e53806461637cadda7f087610fb77b3964a186e2925 Jan 26 16:09:19 crc kubenswrapper[4680]: E0126 16:09:19.836599 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e53b983096d34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:09:19.83567186 +0000 UTC m=+234.996944129,LastTimestamp:2026-01-26 16:09:19.83567186 +0000 UTC m=+234.996944129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.151005 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.151666 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da" exitCode=0 Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.151692 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424" exitCode=0 Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.151700 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f" exitCode=0 Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.151708 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230" exitCode=2 Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.152894 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8e16112d386568fea6786e059d6ec9045526971187bd816863cd3d741b8d3afb"} Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.152922 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba2634a61702f2739c878e53806461637cadda7f087610fb77b3964a186e2925"} Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.153640 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:20 crc kubenswrapper[4680]: E0126 16:09:20.153757 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.154910 4680 generic.go:334] "Generic (PLEG): container finished" podID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" containerID="0df4dec7b0d9ebf640b789e432368f522fbf35f6ba5229f6fb8d41d7bbed2fae" exitCode=0 Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.154942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"523759f0-efd9-4341-8bc5-7482bd6fd6b2","Type":"ContainerDied","Data":"0df4dec7b0d9ebf640b789e432368f522fbf35f6ba5229f6fb8d41d7bbed2fae"} Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.155266 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:20 crc kubenswrapper[4680]: I0126 16:09:20.155452 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.383808 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.384651 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.496534 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock\") pod \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.496586 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir\") pod \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.496637 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access\") pod \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\" (UID: \"523759f0-efd9-4341-8bc5-7482bd6fd6b2\") " Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.497517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "523759f0-efd9-4341-8bc5-7482bd6fd6b2" (UID: "523759f0-efd9-4341-8bc5-7482bd6fd6b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.497580 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "523759f0-efd9-4341-8bc5-7482bd6fd6b2" (UID: "523759f0-efd9-4341-8bc5-7482bd6fd6b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.512396 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "523759f0-efd9-4341-8bc5-7482bd6fd6b2" (UID: "523759f0-efd9-4341-8bc5-7482bd6fd6b2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.599641 4680 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.599674 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.599683 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523759f0-efd9-4341-8bc5-7482bd6fd6b2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.818681 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.819750 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.820337 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:21 crc kubenswrapper[4680]: I0126 16:09:21.820779 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.004603 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.004700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005117 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005178 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005221 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005250 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005589 4680 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005629 4680 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.005647 4680 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.171628 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.173256 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a" exitCode=0 Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.173395 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.173395 4680 scope.go:117] "RemoveContainer" containerID="ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.178119 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"523759f0-efd9-4341-8bc5-7482bd6fd6b2","Type":"ContainerDied","Data":"eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6"} Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.178173 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccada66aaeaee58dce0df55d6e638260d43e1ac1b1c58410991151198a79cc6" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.178191 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.202791 4680 scope.go:117] "RemoveContainer" containerID="7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.203834 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.204396 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.222948 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.223445 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.228976 4680 scope.go:117] "RemoveContainer" containerID="9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.257143 4680 scope.go:117] "RemoveContainer" containerID="29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.275442 4680 scope.go:117] "RemoveContainer" containerID="caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.292953 4680 scope.go:117] "RemoveContainer" containerID="5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.325573 4680 scope.go:117] "RemoveContainer" 
containerID="ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.327674 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\": container with ID starting with ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da not found: ID does not exist" containerID="ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.327735 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da"} err="failed to get container status \"ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\": rpc error: code = NotFound desc = could not find container \"ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da\": container with ID starting with ac052c1d8586b05ec6969515a3a57b92872df9f40c86d4ed267a314333ed45da not found: ID does not exist" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.327774 4680 scope.go:117] "RemoveContainer" containerID="7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.328176 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\": container with ID starting with 7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424 not found: ID does not exist" containerID="7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.328208 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424"} err="failed to get container status \"7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\": rpc error: code = NotFound desc = could not find container \"7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424\": container with ID starting with 7857ea89a0816a0c295c33fc4d42052d6cc4b9ad51fddd6791cf451c8c85f424 not found: ID does not exist" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.328236 4680 scope.go:117] "RemoveContainer" containerID="9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.328567 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\": container with ID starting with 9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f not found: ID does not exist" containerID="9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.328596 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f"} err="failed to get container status \"9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\": rpc error: code = NotFound desc = could not find container \"9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f\": container with ID starting with 
9334cf2cee72f2694482769b4e49c940cf027ff272d84e523a97b09bc753bc0f not found: ID does not exist" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.328628 4680 scope.go:117] "RemoveContainer" containerID="29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.329162 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\": container with ID starting with 29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230 not found: ID does not exist" containerID="29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.329196 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230"} err="failed to get container status \"29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\": rpc error: code = NotFound desc = could not find container \"29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230\": container with ID starting with 29a3625a8780e8846cb0b9f07e36e43e927b06d7010fa84362988bafe8507230 not found: ID does not exist" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.329219 4680 scope.go:117] "RemoveContainer" containerID="caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.329847 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\": container with ID starting with caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a not found: ID does not exist" containerID="caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.329891 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a"} err="failed to get container status \"caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\": rpc error: code = NotFound desc = could not find container \"caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a\": container with ID starting with caaa71cfcb5b8ec42543cef42e9505299abf511172f9271731ed11573ec53a8a not found: ID does not exist" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.329918 4680 scope.go:117] "RemoveContainer" containerID="5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c" Jan 26 16:09:22 crc kubenswrapper[4680]: E0126 16:09:22.331507 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\": container with ID starting with 5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c not found: ID does not exist" containerID="5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c" Jan 26 16:09:22 crc kubenswrapper[4680]: I0126 16:09:22.331557 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c"} err="failed to get container status \"5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\": rpc 
error: code = NotFound desc = could not find container \"5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c\": container with ID starting with 5469933b36ef9409b3706241fb0b3715ef48e1ebdf5387cd932d540a2bd71e9c not found: ID does not exist" Jan 26 16:09:23 crc kubenswrapper[4680]: I0126 16:09:23.175517 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 16:09:25 crc kubenswrapper[4680]: E0126 16:09:25.031545 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e53b983096d34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:09:19.83567186 +0000 UTC m=+234.996944129,LastTimestamp:2026-01-26 16:09:19.83567186 +0000 UTC m=+234.996944129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:09:25 crc kubenswrapper[4680]: I0126 16:09:25.178787 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.447212 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.447657 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.448343 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.448995 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.449754 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" 
Jan 26 16:09:26 crc kubenswrapper[4680]: I0126 16:09:26.449813 4680 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.450387 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Jan 26 16:09:26 crc kubenswrapper[4680]: E0126 16:09:26.651374 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Jan 26 16:09:27 crc kubenswrapper[4680]: E0126 16:09:27.052941 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Jan 26 16:09:27 crc kubenswrapper[4680]: E0126 16:09:27.854481 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Jan 26 16:09:27 crc kubenswrapper[4680]: I0126 16:09:27.872939 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" containerID="cri-o://55d234ea72af885d7adf006600c4496c2e52aeef06c2d3c43c87ae89f47b6a34" gracePeriod=15 Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.211614 4680 generic.go:334] "Generic (PLEG): container finished" podID="84e58c16-df02-4857-aba5-434321c87141" containerID="55d234ea72af885d7adf006600c4496c2e52aeef06c2d3c43c87ae89f47b6a34" exitCode=0 Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.211938 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" event={"ID":"84e58c16-df02-4857-aba5-434321c87141","Type":"ContainerDied","Data":"55d234ea72af885d7adf006600c4496c2e52aeef06c2d3c43c87ae89f47b6a34"} Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.249110 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.249527 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.249931 4680 status_manager.go:851] "Failed to get status for pod" podUID="84e58c16-df02-4857-aba5-434321c87141" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ndf74\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290500 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290536 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290576 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290610 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq9v6\" (UniqueName: \"kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290632 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.290659 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291033 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 
16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291135 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291233 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291311 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291379 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291468 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291609 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291688 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle\") pod \"84e58c16-df02-4857-aba5-434321c87141\" (UID: \"84e58c16-df02-4857-aba5-434321c87141\") " Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291416 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.291875 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.292456 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.293276 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.294034 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.297607 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.297712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6" (OuterVolumeSpecName: "kube-api-access-bq9v6") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "kube-api-access-bq9v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.298022 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.298554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.298578 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.300023 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.300186 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.302570 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.303442 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "84e58c16-df02-4857-aba5-434321c87141" (UID: "84e58c16-df02-4857-aba5-434321c87141"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393242 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393663 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393761 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393784 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393828 4680 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84e58c16-df02-4857-aba5-434321c87141-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393846 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393860 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393873 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393913 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393928 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84e58c16-df02-4857-aba5-434321c87141-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393943 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393956 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq9v6\" (UniqueName: 
\"kubernetes.io/projected/84e58c16-df02-4857-aba5-434321c87141-kube-api-access-bq9v6\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.393995 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:28 crc kubenswrapper[4680]: I0126 16:09:28.394008 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84e58c16-df02-4857-aba5-434321c87141-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 16:09:30 crc kubenswrapper[4680]: E0126 16:09:30.063165 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.074588 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.075690 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.076017 4680 status_manager.go:851] "Failed to get status for pod" podUID="84e58c16-df02-4857-aba5-434321c87141" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ndf74\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.081768 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.084215 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.085366 4680 status_manager.go:851] "Failed to get status for pod" podUID="84e58c16-df02-4857-aba5-434321c87141" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ndf74\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.100291 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" event={"ID":"84e58c16-df02-4857-aba5-434321c87141","Type":"ContainerDied","Data":"49d5d95c72ca62043f47a17d89b03c442b7f32e504566cca4c0819489a2f9dd5"} Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.100460 4680 scope.go:117] "RemoveContainer" containerID="55d234ea72af885d7adf006600c4496c2e52aeef06c2d3c43c87ae89f47b6a34" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.105803 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.105856 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.106211 4680 status_manager.go:851] "Failed to get status for pod" podUID="84e58c16-df02-4857-aba5-434321c87141" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ndf74\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: E0126 16:09:30.106441 4680 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.106650 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:30 crc kubenswrapper[4680]: I0126 16:09:30.107204 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.094987 4680 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c8faf1549762923c2549836f6a682e1ccf231a5cc103f4e1aa9eb7a257f71c64" exitCode=0 Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.095337 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c8faf1549762923c2549836f6a682e1ccf231a5cc103f4e1aa9eb7a257f71c64"} Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.095367 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e04f01bcefff87c5ffd465ebd1dd1f80bf45198a691fd238f65eddef37968cc0"} Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.095625 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.095639 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.096251 4680 status_manager.go:851] "Failed to get status for pod" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:31 crc kubenswrapper[4680]: E0126 16:09:31.096333 4680 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:31 crc kubenswrapper[4680]: I0126 16:09:31.096461 4680 status_manager.go:851] "Failed to get status for pod" podUID="84e58c16-df02-4857-aba5-434321c87141" pod="openshift-authentication/oauth-openshift-558db77b4-ndf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-ndf74\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.107715 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.108008 4680 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf" exitCode=1 Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.108061 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf"} Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.108473 4680 scope.go:117] "RemoveContainer" containerID="d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf" Jan 26 16:09:32 crc 
kubenswrapper[4680]: I0126 16:09:32.113991 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4d8a404e62a9bf36a6f65a45f04c1cd8ff3e55f4aa6bd67bdb0b88d5a39f1ab7"} Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.114037 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2c93407c1ac637715340246368bebe54ffc4d6670558dd9b7d4e8f4a2ca7625a"} Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.114046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f55770cad54b980876ebcfbaf6dd509e20b2a948fd027df988cf34e6daf492cd"} Jan 26 16:09:32 crc kubenswrapper[4680]: I0126 16:09:32.114054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c44a298d4b4757c66ad2cf96f00e3e9a0d38c6009584233fbd04760948184ac"} Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.120246 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.120569 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de6c2781153effe369be783af6ed293d01fef5bf82ab1cc0850270cef8b4d2d4"} Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.123778 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d956b1ccb4f79b3d1ddc5784b7708e0b2d7b7193303f1e385a5c2ba03397faf6"} Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.124026 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.124183 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:33 crc kubenswrapper[4680]: I0126 16:09:33.124261 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:35 crc kubenswrapper[4680]: I0126 16:09:35.108671 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:35 crc kubenswrapper[4680]: I0126 16:09:35.109103 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:35 crc kubenswrapper[4680]: I0126 16:09:35.116894 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:37 crc kubenswrapper[4680]: I0126 16:09:37.916122 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:09:37 crc kubenswrapper[4680]: I0126 
16:09:37.920899 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:09:38 crc kubenswrapper[4680]: I0126 16:09:38.137235 4680 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:38 crc kubenswrapper[4680]: I0126 16:09:38.149481 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:09:38 crc kubenswrapper[4680]: I0126 16:09:38.185672 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6fffa97a-5251-48e1-b0ff-7e68636b58b7" Jan 26 16:09:39 crc kubenswrapper[4680]: I0126 16:09:39.154730 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:39 crc kubenswrapper[4680]: I0126 16:09:39.155092 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:39 crc kubenswrapper[4680]: I0126 16:09:39.160451 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6fffa97a-5251-48e1-b0ff-7e68636b58b7" Jan 26 16:09:39 crc kubenswrapper[4680]: I0126 16:09:39.162589 4680 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://3c44a298d4b4757c66ad2cf96f00e3e9a0d38c6009584233fbd04760948184ac" Jan 26 16:09:39 crc kubenswrapper[4680]: I0126 16:09:39.162657 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:40 crc kubenswrapper[4680]: I0126 16:09:40.158614 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:40 crc kubenswrapper[4680]: I0126 16:09:40.158651 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2affa4a5-f8e6-40ca-bf8f-f022bc800dc7" Jan 26 16:09:40 crc kubenswrapper[4680]: I0126 16:09:40.162338 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6fffa97a-5251-48e1-b0ff-7e68636b58b7" Jan 26 16:09:44 crc kubenswrapper[4680]: I0126 16:09:44.753758 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 16:09:44 crc kubenswrapper[4680]: I0126 16:09:44.934138 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 16:09:45 crc kubenswrapper[4680]: I0126 16:09:45.325394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 16:09:45 crc kubenswrapper[4680]: I0126 16:09:45.611743 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 16:09:45 crc kubenswrapper[4680]: I0126 
16:09:45.884798 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 16:09:46 crc kubenswrapper[4680]: I0126 16:09:46.326334 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 16:09:46 crc kubenswrapper[4680]: I0126 16:09:46.438383 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 16:09:47 crc kubenswrapper[4680]: I0126 16:09:47.153813 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 16:09:47 crc kubenswrapper[4680]: I0126 16:09:47.189496 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 16:09:48 crc kubenswrapper[4680]: I0126 16:09:48.198715 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 16:09:48 crc kubenswrapper[4680]: I0126 16:09:48.651670 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 16:09:48 crc kubenswrapper[4680]: I0126 16:09:48.926177 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 16:09:48 crc kubenswrapper[4680]: I0126 16:09:48.991144 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 16:09:49 crc kubenswrapper[4680]: I0126 16:09:49.119300 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 16:09:49 crc kubenswrapper[4680]: I0126 16:09:49.846932 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 16:09:50 crc kubenswrapper[4680]: I0126 16:09:50.438019 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 16:09:50 crc kubenswrapper[4680]: I0126 16:09:50.636853 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 16:09:50 crc kubenswrapper[4680]: I0126 16:09:50.834895 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 16:09:50 crc kubenswrapper[4680]: I0126 16:09:50.844691 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:09:50 crc kubenswrapper[4680]: I0126 16:09:50.945368 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.056879 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.235878 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.360289 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 16:09:51 crc 
kubenswrapper[4680]: I0126 16:09:51.370303 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.456358 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.541557 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.708000 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 16:09:51 crc kubenswrapper[4680]: I0126 16:09:51.792385 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.012942 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.215879 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.223870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.251149 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.279615 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.392587 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.413616 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.694807 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.803628 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.819423 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.931557 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 16:09:52 crc kubenswrapper[4680]: I0126 16:09:52.964987 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.070908 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.377932 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 
16:09:53.507857 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.530677 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.561513 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.796464 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.816982 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.845607 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 16:09:53 crc kubenswrapper[4680]: I0126 16:09:53.847891 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.026599 4680 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.030294 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ndf74","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.030354 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.034914 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.047155 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.047140021 podStartE2EDuration="16.047140021s" podCreationTimestamp="2026-01-26 16:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:09:54.045292978 +0000 UTC m=+269.206565257" watchObservedRunningTime="2026-01-26 16:09:54.047140021 +0000 UTC m=+269.208412290" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.055834 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.191581 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.213767 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.329711 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.406492 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.713404 4680 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.930686 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.953000 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.970354 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.972422 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 16:09:54 crc kubenswrapper[4680]: I0126 16:09:54.983647 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.106313 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.139794 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.176281 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e58c16-df02-4857-aba5-434321c87141" path="/var/lib/kubelet/pods/84e58c16-df02-4857-aba5-434321c87141/volumes" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.225734 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.263582 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.319278 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.326717 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.385979 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.414970 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.471839 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79656f7ff7-xktrf"] Jan 26 16:09:55 crc kubenswrapper[4680]: E0126 16:09:55.472142 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" containerName="installer" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.472165 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" containerName="installer" Jan 26 16:09:55 crc kubenswrapper[4680]: E0126 16:09:55.472186 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.472195 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.472313 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="523759f0-efd9-4341-8bc5-7482bd6fd6b2" containerName="installer" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.472328 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e58c16-df02-4857-aba5-434321c87141" containerName="oauth-openshift" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.472771 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.478987 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.479134 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.479155 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.479926 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488283 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488310 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488571 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488579 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488696 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.488807 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.489196 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.494095 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.494329 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.495756 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 16:09:55 crc kubenswrapper[4680]: 
I0126 16:09:55.496005 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.501210 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.505313 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79656f7ff7-xktrf"] Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.511513 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526386 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526424 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-error\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526443 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526463 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-session\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526483 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526504 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5prj\" (UniqueName: \"kubernetes.io/projected/78169d3a-fe9d-418a-9714-211277755dc8-kube-api-access-j5prj\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526538 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526565 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526597 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526613 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-login\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526627 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526642 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-audit-policies\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.526661 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78169d3a-fe9d-418a-9714-211277755dc8-audit-dir\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " 
pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.628120 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.628788 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.628865 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-error\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629450 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629480 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-session\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629786 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629816 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5prj\" (UniqueName: \"kubernetes.io/projected/78169d3a-fe9d-418a-9714-211277755dc8-kube-api-access-j5prj\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629854 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " 
pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629897 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629935 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.629983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.630003 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-login\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.630018 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.630036 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-audit-policies\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.630058 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78169d3a-fe9d-418a-9714-211277755dc8-audit-dir\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.630126 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78169d3a-fe9d-418a-9714-211277755dc8-audit-dir\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc 
kubenswrapper[4680]: I0126 16:09:55.630562 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.633554 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.633964 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-audit-policies\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.634366 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-session\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.637827 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.644024 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.646649 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.646914 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.647259 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.647683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-login\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.647885 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.654546 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-user-template-error\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.654942 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5prj\" (UniqueName: \"kubernetes.io/projected/78169d3a-fe9d-418a-9714-211277755dc8-kube-api-access-j5prj\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.655984 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78169d3a-fe9d-418a-9714-211277755dc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79656f7ff7-xktrf\" (UID: \"78169d3a-fe9d-418a-9714-211277755dc8\") " pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.727983 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.791201 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.853575 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.935832 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.937832 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 16:09:55 crc kubenswrapper[4680]: I0126 16:09:55.939374 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.032452 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.194292 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.240617 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79656f7ff7-xktrf"] Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.257620 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.259594 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.353869 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.385913 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.411996 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.425768 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.533391 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.544588 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.582761 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.649461 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.691692 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.708243 4680 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.758623 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.822016 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.922190 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.941575 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 16:09:56 crc kubenswrapper[4680]: I0126 16:09:56.981093 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.102349 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.235098 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.263633 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" event={"ID":"78169d3a-fe9d-418a-9714-211277755dc8","Type":"ContainerStarted","Data":"da0fcc6fdf671926290cb187796d8999e6dc87f490dc1eed31ac278e555a69f1"} Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.265192 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" event={"ID":"78169d3a-fe9d-418a-9714-211277755dc8","Type":"ContainerStarted","Data":"e9663994a4bbe66f732ee3437343c3f3aaf32d258215e4ffb2ff0dd9cb170c87"} Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.265605 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.271296 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.275604 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.294217 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podStartSLOduration=55.294184912 podStartE2EDuration="55.294184912s" podCreationTimestamp="2026-01-26 16:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:09:57.289406405 +0000 UTC m=+272.450678694" watchObservedRunningTime="2026-01-26 16:09:57.294184912 +0000 UTC m=+272.455457221" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.302230 4680 patch_prober.go:28] interesting pod/oauth-openshift-79656f7ff7-xktrf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:54728->10.217.0.56:6443: read: connection reset by peer" start-of-body= Jan 26 16:09:57 
crc kubenswrapper[4680]: I0126 16:09:57.302495 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podUID="78169d3a-fe9d-418a-9714-211277755dc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:54728->10.217.0.56:6443: read: connection reset by peer" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.307413 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.380195 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.397158 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.444439 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.447936 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.500909 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.575480 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.586292 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.634313 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.750493 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.781897 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 16:09:57 crc kubenswrapper[4680]: I0126 16:09:57.794358 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.027993 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.100216 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.170886 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.268727 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-79656f7ff7-xktrf_78169d3a-fe9d-418a-9714-211277755dc8/oauth-openshift/0.log" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.268776 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="78169d3a-fe9d-418a-9714-211277755dc8" containerID="da0fcc6fdf671926290cb187796d8999e6dc87f490dc1eed31ac278e555a69f1" exitCode=255 Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.268805 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" event={"ID":"78169d3a-fe9d-418a-9714-211277755dc8","Type":"ContainerDied","Data":"da0fcc6fdf671926290cb187796d8999e6dc87f490dc1eed31ac278e555a69f1"} Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.269268 4680 scope.go:117] "RemoveContainer" containerID="da0fcc6fdf671926290cb187796d8999e6dc87f490dc1eed31ac278e555a69f1" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.285424 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.309179 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.384357 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.446877 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.503403 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.521995 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.555174 4680 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.565256 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.585281 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.698777 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.759570 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.782745 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.906382 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.911635 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.955889 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.977802 4680 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 16:09:58 crc kubenswrapper[4680]: I0126 16:09:58.996947 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.057233 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.151172 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.219735 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.280281 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-79656f7ff7-xktrf_78169d3a-fe9d-418a-9714-211277755dc8/oauth-openshift/0.log" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.280341 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" event={"ID":"78169d3a-fe9d-418a-9714-211277755dc8","Type":"ContainerStarted","Data":"5e28caa76757f4048bd4ff28e89dd75676f7164bd6b95988870030c455c2796e"} Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.280726 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.284405 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.350938 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.379350 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.412508 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.513138 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.603767 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.613032 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.614004 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.631856 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.649105 4680 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.666660 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.680004 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.707193 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.789746 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 16:09:59 crc kubenswrapper[4680]: I0126 16:09:59.839992 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.062830 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.158855 4680 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.302327 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.386364 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.392361 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.482598 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.485586 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.498888 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.734716 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.778428 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.796010 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.812555 4680 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.813456 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8e16112d386568fea6786e059d6ec9045526971187bd816863cd3d741b8d3afb" 
gracePeriod=5 Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.824315 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.829803 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.959142 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.961279 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 16:10:00 crc kubenswrapper[4680]: I0126 16:10:00.997711 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.050558 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.052026 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.108870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.150957 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.171933 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.384801 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.409910 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.450768 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.484322 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.508506 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.537706 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.610414 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.678291 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.702532 4680 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.901163 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.907154 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.923448 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 16:10:01 crc kubenswrapper[4680]: I0126 16:10:01.970510 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.019568 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.055606 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.091401 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.130969 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.151435 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.230609 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.401386 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.417623 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.717878 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.722236 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.893983 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.913837 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 16:10:02 crc kubenswrapper[4680]: I0126 16:10:02.972681 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.039995 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 16:10:03 
crc kubenswrapper[4680]: I0126 16:10:03.082370 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.178770 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.277315 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.292736 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.324089 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.523613 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.746400 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.749334 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 16:10:03 crc kubenswrapper[4680]: I0126 16:10:03.842248 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.004478 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.150739 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.233604 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.250282 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.546755 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.558738 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.633777 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.670549 4680 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.726198 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 16:10:04 crc kubenswrapper[4680]: I0126 16:10:04.765157 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 16:10:04 crc 
kubenswrapper[4680]: I0126 16:10:04.841899 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.000928 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.019495 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.028939 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.276772 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.277919 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.355670 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.461027 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.525368 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.707300 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.802765 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.812810 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.872598 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.915586 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 16:10:05 crc kubenswrapper[4680]: I0126 16:10:05.998507 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.313658 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.313708 4680 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8e16112d386568fea6786e059d6ec9045526971187bd816863cd3d741b8d3afb" exitCode=137 Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.390184 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 
16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.390254 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.479744 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.479813 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.479889 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.479914 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.479935 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.480286 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.480331 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.480352 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.480370 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.488746 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.581426 4680 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.581460 4680 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.581472 4680 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.581482 4680 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:06 crc kubenswrapper[4680]: I0126 16:10:06.581491 4680 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.056750 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.127777 4680 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.175770 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.215858 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.320542 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.320948 4680 scope.go:117] "RemoveContainer" containerID="8e16112d386568fea6786e059d6ec9045526971187bd816863cd3d741b8d3afb" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.321002 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.345589 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.631630 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 16:10:07 crc kubenswrapper[4680]: I0126 16:10:07.816716 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.383880 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vvkm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.383914 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vvkm4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.384662 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.384777 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.423631 4680 generic.go:334] "Generic (PLEG): container finished" podID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerID="512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198" exitCode=0 Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.423694 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerDied","Data":"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198"} Jan 26 16:10:24 crc kubenswrapper[4680]: I0126 16:10:24.424305 4680 scope.go:117] "RemoveContainer" containerID="512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198" Jan 26 16:10:25 crc kubenswrapper[4680]: I0126 16:10:25.016806 4680 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 16:10:25 crc kubenswrapper[4680]: I0126 16:10:25.432748 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerStarted","Data":"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad"} Jan 26 16:10:25 crc kubenswrapper[4680]: I0126 16:10:25.434048 4680 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:10:25 crc kubenswrapper[4680]: I0126 16:10:25.438537 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.528936 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.531275 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerName="controller-manager" containerID="cri-o://a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802" gracePeriod=30 Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.622538 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.622958 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerName="route-controller-manager" containerID="cri-o://478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818" gracePeriod=30 Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.941938 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:10:30 crc kubenswrapper[4680]: I0126 16:10:30.989562 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.000947 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert\") pod \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.001012 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles\") pod \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.001037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-469c9\" (UniqueName: \"kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9\") pod \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.001059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config\") pod \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.001095 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca\") pod \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\" (UID: \"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.002585 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca" (OuterVolumeSpecName: "client-ca") pod "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" (UID: "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.006472 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" (UID: "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.007194 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config" (OuterVolumeSpecName: "config") pod "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" (UID: "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.008161 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9" (OuterVolumeSpecName: "kube-api-access-469c9") pod "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" (UID: "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca"). 
InnerVolumeSpecName "kube-api-access-469c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.008380 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" (UID: "1372a4ea-cf38-4ec7-afe5-90e7e1d22dca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102493 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca\") pod \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102566 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vmv\" (UniqueName: \"kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv\") pod \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert\") pod \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102636 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config\") pod \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\" (UID: \"36cf56f2-0fb2-4172-be72-a6c8097a2bf5\") " Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102820 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102831 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102841 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-469c9\" (UniqueName: \"kubernetes.io/projected/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-kube-api-access-469c9\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102850 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.102857 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.103454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca" (OuterVolumeSpecName: "client-ca") pod "36cf56f2-0fb2-4172-be72-a6c8097a2bf5" (UID: 
"36cf56f2-0fb2-4172-be72-a6c8097a2bf5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.103576 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config" (OuterVolumeSpecName: "config") pod "36cf56f2-0fb2-4172-be72-a6c8097a2bf5" (UID: "36cf56f2-0fb2-4172-be72-a6c8097a2bf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.105664 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36cf56f2-0fb2-4172-be72-a6c8097a2bf5" (UID: "36cf56f2-0fb2-4172-be72-a6c8097a2bf5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.105762 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv" (OuterVolumeSpecName: "kube-api-access-z8vmv") pod "36cf56f2-0fb2-4172-be72-a6c8097a2bf5" (UID: "36cf56f2-0fb2-4172-be72-a6c8097a2bf5"). InnerVolumeSpecName "kube-api-access-z8vmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.203658 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.203695 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vmv\" (UniqueName: \"kubernetes.io/projected/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-kube-api-access-z8vmv\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.203707 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.203745 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cf56f2-0fb2-4172-be72-a6c8097a2bf5-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.480555 4680 generic.go:334] "Generic (PLEG): container finished" podID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerID="a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802" exitCode=0 Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.480576 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.480623 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" event={"ID":"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca","Type":"ContainerDied","Data":"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802"} Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.480673 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-khttt" event={"ID":"1372a4ea-cf38-4ec7-afe5-90e7e1d22dca","Type":"ContainerDied","Data":"f7e4ec26488ac4a5a7a4a2a0314998d2893d858f0393b2dac8c61e4017a83a6d"} Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.480700 4680 scope.go:117] "RemoveContainer" containerID="a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.482380 4680 generic.go:334] "Generic (PLEG): container finished" podID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerID="478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818" exitCode=0 Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.482399 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" event={"ID":"36cf56f2-0fb2-4172-be72-a6c8097a2bf5","Type":"ContainerDied","Data":"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818"} Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.482415 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" event={"ID":"36cf56f2-0fb2-4172-be72-a6c8097a2bf5","Type":"ContainerDied","Data":"10a3faa4db7c84c939e19af811f5eb69ee9c537a8abbc2ff90fc6794fac4b978"} Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.482460 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.494530 4680 scope.go:117] "RemoveContainer" containerID="a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802" Jan 26 16:10:31 crc kubenswrapper[4680]: E0126 16:10:31.496526 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802\": container with ID starting with a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802 not found: ID does not exist" containerID="a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.496637 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802"} err="failed to get container status \"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802\": rpc error: code = NotFound desc = could not find container \"a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802\": container with ID starting with a167fa7c313ae5dc10cabc7559177d60a6611945c7fe85a21ca1558dab0e7802 not found: ID does not exist" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.496704 4680 scope.go:117] "RemoveContainer" containerID="478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.499587 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.503306 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-khttt"] Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.512705 4680 scope.go:117] "RemoveContainer" containerID="478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818" Jan 26 16:10:31 crc kubenswrapper[4680]: E0126 16:10:31.513234 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818\": container with ID starting with 478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818 not found: ID does not exist" containerID="478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.513279 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818"} err="failed to get container status \"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818\": rpc error: code = NotFound desc = could not find container \"478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818\": container with ID starting with 478c251b8eb778dfb08784a8b90ae5a9f0e85caebff53268f701871dbe256818 not found: ID does not exist" Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.518864 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 16:10:31 crc kubenswrapper[4680]: I0126 16:10:31.527180 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6m6v"] Jan 26 
16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.250590 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:10:32 crc kubenswrapper[4680]: E0126 16:10:32.252015 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerName="controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252159 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerName="controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: E0126 16:10:32.252254 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerName="route-controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252331 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerName="route-controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: E0126 16:10:32.252418 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252502 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252688 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" containerName="controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252777 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.252861 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" containerName="route-controller-manager" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.253407 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.255152 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.255766 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.258879 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.259208 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.259258 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.260312 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.260415 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.260425 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.260583 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.261056 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.261276 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.261348 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.261407 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.261528 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.273592 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.274906 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.278236 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318580 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318630 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n2mxm\" (UniqueName: \"kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318665 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318739 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318793 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318826 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318857 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.318962 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hcz\" (UniqueName: \"kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420406 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n2mxm\" (UniqueName: \"kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420773 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420809 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420846 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420874 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420926 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420950 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9hcz\" (UniqueName: \"kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.420983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca\") pod 
\"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.421967 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.422742 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.423566 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.423655 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.424387 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.432928 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.432944 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.442054 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2mxm\" (UniqueName: \"kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm\") pod \"controller-manager-759b4c7f9b-clbvb\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.447041 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9hcz\" (UniqueName: \"kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz\") pod \"route-controller-manager-5479c9f4cd-qtgjn\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.581724 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.591764 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.882324 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:10:32 crc kubenswrapper[4680]: W0126 16:10:32.895760 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84441839_5310_4707_84bc_9c64b5c78464.slice/crio-f51bed79ec5bfda414832defed646ab02e54bd9bf8e3fa59016cf4cc757af076 WatchSource:0}: Error finding container f51bed79ec5bfda414832defed646ab02e54bd9bf8e3fa59016cf4cc757af076: Status 404 returned error can't find the container with id f51bed79ec5bfda414832defed646ab02e54bd9bf8e3fa59016cf4cc757af076 Jan 26 16:10:32 crc kubenswrapper[4680]: I0126 16:10:32.944811 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.177574 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1372a4ea-cf38-4ec7-afe5-90e7e1d22dca" path="/var/lib/kubelet/pods/1372a4ea-cf38-4ec7-afe5-90e7e1d22dca/volumes" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.178546 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cf56f2-0fb2-4172-be72-a6c8097a2bf5" path="/var/lib/kubelet/pods/36cf56f2-0fb2-4172-be72-a6c8097a2bf5/volumes" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.230228 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbp7x"] Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.230980 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.245116 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbp7x"] Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331380 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331464 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-trusted-ca\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331501 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/883e65ab-388e-440a-9ffc-31afbaeae747-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331539 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xh9\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-kube-api-access-82xh9\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331563 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-registry-tls\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331589 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-bound-sa-token\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331625 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-registry-certificates\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.331652 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/883e65ab-388e-440a-9ffc-31afbaeae747-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/883e65ab-388e-440a-9ffc-31afbaeae747-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xh9\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-kube-api-access-82xh9\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-registry-tls\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-bound-sa-token\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433292 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-registry-certificates\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433320 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/883e65ab-388e-440a-9ffc-31afbaeae747-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.433390 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-trusted-ca\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.434262 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/883e65ab-388e-440a-9ffc-31afbaeae747-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.434580 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-trusted-ca\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.434711 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/883e65ab-388e-440a-9ffc-31afbaeae747-registry-certificates\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.439100 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-registry-tls\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.440278 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/883e65ab-388e-440a-9ffc-31afbaeae747-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.455354 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xh9\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-kube-api-access-82xh9\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.477084 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/883e65ab-388e-440a-9ffc-31afbaeae747-bound-sa-token\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.496784 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" event={"ID":"5f829ac2-5694-475e-9eab-e8c8216bdefe","Type":"ContainerStarted","Data":"9de6a8c2a9cbdd865afff8f012cd86140c1b473f7ec4e004b06a09de5fad79fd"} Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.496848 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" event={"ID":"5f829ac2-5694-475e-9eab-e8c8216bdefe","Type":"ContainerStarted","Data":"a081a99f94cb1390e4c1bb5790eb1cb99a0b7bcb3186e5e464bb29f9092ed52d"} Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.496978 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.499091 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" event={"ID":"84441839-5310-4707-84bc-9c64b5c78464","Type":"ContainerStarted","Data":"5d2612b632e244fd27576c1ea43ba2d8051c41cc5e7af088248602a7adcd4018"} Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.499124 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" event={"ID":"84441839-5310-4707-84bc-9c64b5c78464","Type":"ContainerStarted","Data":"f51bed79ec5bfda414832defed646ab02e54bd9bf8e3fa59016cf4cc757af076"} Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.499825 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.502538 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.511955 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.524893 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xbp7x\" (UID: \"883e65ab-388e-440a-9ffc-31afbaeae747\") " pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.548042 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.610469 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" podStartSLOduration=3.610448718 podStartE2EDuration="3.610448718s" podCreationTimestamp="2026-01-26 16:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:10:33.570179214 +0000 UTC m=+308.731451483" watchObservedRunningTime="2026-01-26 16:10:33.610448718 +0000 UTC m=+308.771720987" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.610982 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" podStartSLOduration=3.610976233 podStartE2EDuration="3.610976233s" podCreationTimestamp="2026-01-26 16:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:10:33.608996576 +0000 UTC m=+308.770268845" watchObservedRunningTime="2026-01-26 16:10:33.610976233 +0000 UTC m=+308.772248502" Jan 26 16:10:33 crc kubenswrapper[4680]: I0126 16:10:33.988286 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xbp7x"] Jan 26 16:10:34 crc kubenswrapper[4680]: I0126 16:10:34.505346 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" event={"ID":"883e65ab-388e-440a-9ffc-31afbaeae747","Type":"ContainerStarted","Data":"aa54542eaa7acaf90787dbfb160b11aa58ae1c4e0572702a4189ff63ca0d63d0"} Jan 26 16:10:34 crc kubenswrapper[4680]: I0126 16:10:34.505404 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" event={"ID":"883e65ab-388e-440a-9ffc-31afbaeae747","Type":"ContainerStarted","Data":"816aab6df974bc51f138cacc993d2d3a0e039e016d847856142ad17871560542"} Jan 26 16:10:34 crc kubenswrapper[4680]: I0126 16:10:34.531664 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" podStartSLOduration=1.531646423 podStartE2EDuration="1.531646423s" podCreationTimestamp="2026-01-26 16:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:10:34.531286733 +0000 UTC m=+309.692559012" watchObservedRunningTime="2026-01-26 16:10:34.531646423 +0000 UTC m=+309.692918692" Jan 26 16:10:35 crc kubenswrapper[4680]: I0126 16:10:35.509722 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:53 crc kubenswrapper[4680]: I0126 16:10:53.555214 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xbp7x" Jan 26 16:10:53 crc kubenswrapper[4680]: I0126 16:10:53.607781 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"] Jan 26 16:11:16 crc kubenswrapper[4680]: I0126 16:11:16.980659 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:11:16 crc kubenswrapper[4680]: I0126 16:11:16.981207 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:11:18 crc kubenswrapper[4680]: I0126 16:11:18.649373 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" podUID="bdfe3694-fc1a-4262-85ea-413fad222b35" containerName="registry" containerID="cri-o://de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9" gracePeriod=30 Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.008735 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120615 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120669 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120725 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120772 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120826 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jhq4\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120862 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" 
(UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.120892 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates\") pod \"bdfe3694-fc1a-4262-85ea-413fad222b35\" (UID: \"bdfe3694-fc1a-4262-85ea-413fad222b35\") " Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.122002 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.122211 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.127051 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.127491 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.128606 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.131118 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.137879 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4" (OuterVolumeSpecName: "kube-api-access-6jhq4") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "kube-api-access-6jhq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.141206 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bdfe3694-fc1a-4262-85ea-413fad222b35" (UID: "bdfe3694-fc1a-4262-85ea-413fad222b35"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222719 4680 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfe3694-fc1a-4262-85ea-413fad222b35-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222752 4680 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222765 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222773 4680 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfe3694-fc1a-4262-85ea-413fad222b35-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222782 4680 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222792 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe3694-fc1a-4262-85ea-413fad222b35-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.222801 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jhq4\" (UniqueName: \"kubernetes.io/projected/bdfe3694-fc1a-4262-85ea-413fad222b35-kube-api-access-6jhq4\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.734452 4680 generic.go:334] "Generic (PLEG): container finished" podID="bdfe3694-fc1a-4262-85ea-413fad222b35" containerID="de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9" exitCode=0 Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.734491 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" event={"ID":"bdfe3694-fc1a-4262-85ea-413fad222b35","Type":"ContainerDied","Data":"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9"} Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.734525 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" event={"ID":"bdfe3694-fc1a-4262-85ea-413fad222b35","Type":"ContainerDied","Data":"0bc26e327f728c2eccf566e65664de60920a8cdfad680f153d33bac17ad70966"} Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.734542 4680 scope.go:117] "RemoveContainer" containerID="de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9" Jan 26 16:11:19 crc 
kubenswrapper[4680]: I0126 16:11:19.734552 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pzz4v" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.754481 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"] Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.758384 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pzz4v"] Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.763841 4680 scope.go:117] "RemoveContainer" containerID="de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9" Jan 26 16:11:19 crc kubenswrapper[4680]: E0126 16:11:19.764383 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9\": container with ID starting with de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9 not found: ID does not exist" containerID="de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9" Jan 26 16:11:19 crc kubenswrapper[4680]: I0126 16:11:19.764416 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9"} err="failed to get container status \"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9\": rpc error: code = NotFound desc = could not find container \"de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9\": container with ID starting with de9cb462b99e236990495d80f30e002cb0aa4a11d7562413b1f79cbf335d8fa9 not found: ID does not exist" Jan 26 16:11:21 crc kubenswrapper[4680]: I0126 16:11:21.175448 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfe3694-fc1a-4262-85ea-413fad222b35" path="/var/lib/kubelet/pods/bdfe3694-fc1a-4262-85ea-413fad222b35/volumes" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.512577 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.513516 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" podUID="5f829ac2-5694-475e-9eab-e8c8216bdefe" containerName="controller-manager" containerID="cri-o://9de6a8c2a9cbdd865afff8f012cd86140c1b473f7ec4e004b06a09de5fad79fd" gracePeriod=30 Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.542160 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.542359 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" podUID="84441839-5310-4707-84bc-9c64b5c78464" containerName="route-controller-manager" containerID="cri-o://5d2612b632e244fd27576c1ea43ba2d8051c41cc5e7af088248602a7adcd4018" gracePeriod=30 Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.793924 4680 generic.go:334] "Generic (PLEG): container finished" podID="5f829ac2-5694-475e-9eab-e8c8216bdefe" containerID="9de6a8c2a9cbdd865afff8f012cd86140c1b473f7ec4e004b06a09de5fad79fd" exitCode=0 Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.794012 
4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" event={"ID":"5f829ac2-5694-475e-9eab-e8c8216bdefe","Type":"ContainerDied","Data":"9de6a8c2a9cbdd865afff8f012cd86140c1b473f7ec4e004b06a09de5fad79fd"} Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.795776 4680 generic.go:334] "Generic (PLEG): container finished" podID="84441839-5310-4707-84bc-9c64b5c78464" containerID="5d2612b632e244fd27576c1ea43ba2d8051c41cc5e7af088248602a7adcd4018" exitCode=0 Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.795810 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" event={"ID":"84441839-5310-4707-84bc-9c64b5c78464","Type":"ContainerDied","Data":"5d2612b632e244fd27576c1ea43ba2d8051c41cc5e7af088248602a7adcd4018"} Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.935324 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.982290 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9hcz\" (UniqueName: \"kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz\") pod \"84441839-5310-4707-84bc-9c64b5c78464\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.982331 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca\") pod \"84441839-5310-4707-84bc-9c64b5c78464\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.982355 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert\") pod \"84441839-5310-4707-84bc-9c64b5c78464\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.982374 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config\") pod \"84441839-5310-4707-84bc-9c64b5c78464\" (UID: \"84441839-5310-4707-84bc-9c64b5c78464\") " Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.983001 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config" (OuterVolumeSpecName: "config") pod "84441839-5310-4707-84bc-9c64b5c78464" (UID: "84441839-5310-4707-84bc-9c64b5c78464"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.983267 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca" (OuterVolumeSpecName: "client-ca") pod "84441839-5310-4707-84bc-9c64b5c78464" (UID: "84441839-5310-4707-84bc-9c64b5c78464"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.983488 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.983522 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84441839-5310-4707-84bc-9c64b5c78464-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.988240 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84441839-5310-4707-84bc-9c64b5c78464" (UID: "84441839-5310-4707-84bc-9c64b5c78464"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.988289 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz" (OuterVolumeSpecName: "kube-api-access-l9hcz") pod "84441839-5310-4707-84bc-9c64b5c78464" (UID: "84441839-5310-4707-84bc-9c64b5c78464"). InnerVolumeSpecName "kube-api-access-l9hcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:30 crc kubenswrapper[4680]: I0126 16:11:30.991269 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084553 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert\") pod \"5f829ac2-5694-475e-9eab-e8c8216bdefe\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084611 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2mxm\" (UniqueName: \"kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm\") pod \"5f829ac2-5694-475e-9eab-e8c8216bdefe\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca\") pod \"5f829ac2-5694-475e-9eab-e8c8216bdefe\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084658 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles\") pod \"5f829ac2-5694-475e-9eab-e8c8216bdefe\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084702 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config\") pod \"5f829ac2-5694-475e-9eab-e8c8216bdefe\" (UID: \"5f829ac2-5694-475e-9eab-e8c8216bdefe\") " Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084883 4680 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-l9hcz\" (UniqueName: \"kubernetes.io/projected/84441839-5310-4707-84bc-9c64b5c78464-kube-api-access-l9hcz\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.084895 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84441839-5310-4707-84bc-9c64b5c78464-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.085713 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config" (OuterVolumeSpecName: "config") pod "5f829ac2-5694-475e-9eab-e8c8216bdefe" (UID: "5f829ac2-5694-475e-9eab-e8c8216bdefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.086557 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5f829ac2-5694-475e-9eab-e8c8216bdefe" (UID: "5f829ac2-5694-475e-9eab-e8c8216bdefe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.086684 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca" (OuterVolumeSpecName: "client-ca") pod "5f829ac2-5694-475e-9eab-e8c8216bdefe" (UID: "5f829ac2-5694-475e-9eab-e8c8216bdefe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.088873 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5f829ac2-5694-475e-9eab-e8c8216bdefe" (UID: "5f829ac2-5694-475e-9eab-e8c8216bdefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.088869 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm" (OuterVolumeSpecName: "kube-api-access-n2mxm") pod "5f829ac2-5694-475e-9eab-e8c8216bdefe" (UID: "5f829ac2-5694-475e-9eab-e8c8216bdefe"). InnerVolumeSpecName "kube-api-access-n2mxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.185514 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f829ac2-5694-475e-9eab-e8c8216bdefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.185576 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2mxm\" (UniqueName: \"kubernetes.io/projected/5f829ac2-5694-475e-9eab-e8c8216bdefe-kube-api-access-n2mxm\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.185589 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.185598 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.185608 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f829ac2-5694-475e-9eab-e8c8216bdefe-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.802271 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" event={"ID":"5f829ac2-5694-475e-9eab-e8c8216bdefe","Type":"ContainerDied","Data":"a081a99f94cb1390e4c1bb5790eb1cb99a0b7bcb3186e5e464bb29f9092ed52d"} Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.802686 4680 scope.go:117] "RemoveContainer" containerID="9de6a8c2a9cbdd865afff8f012cd86140c1b473f7ec4e004b06a09de5fad79fd" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.802566 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759b4c7f9b-clbvb" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.805193 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" event={"ID":"84441839-5310-4707-84bc-9c64b5c78464","Type":"ContainerDied","Data":"f51bed79ec5bfda414832defed646ab02e54bd9bf8e3fa59016cf4cc757af076"} Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.805306 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.819930 4680 scope.go:117] "RemoveContainer" containerID="5d2612b632e244fd27576c1ea43ba2d8051c41cc5e7af088248602a7adcd4018" Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.830327 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.836843 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5479c9f4cd-qtgjn"] Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.848127 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:11:31 crc kubenswrapper[4680]: I0126 16:11:31.852092 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-759b4c7f9b-clbvb"] Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.300697 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h"] Jan 26 16:11:32 crc kubenswrapper[4680]: E0126 16:11:32.300935 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfe3694-fc1a-4262-85ea-413fad222b35" containerName="registry" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.300947 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe3694-fc1a-4262-85ea-413fad222b35" containerName="registry" Jan 26 16:11:32 crc kubenswrapper[4680]: E0126 16:11:32.300963 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84441839-5310-4707-84bc-9c64b5c78464" containerName="route-controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.300991 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="84441839-5310-4707-84bc-9c64b5c78464" containerName="route-controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: E0126 16:11:32.301008 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f829ac2-5694-475e-9eab-e8c8216bdefe" containerName="controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301015 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f829ac2-5694-475e-9eab-e8c8216bdefe" containerName="controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301127 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfe3694-fc1a-4262-85ea-413fad222b35" containerName="registry" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301143 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="84441839-5310-4707-84bc-9c64b5c78464" containerName="route-controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301150 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f829ac2-5694-475e-9eab-e8c8216bdefe" containerName="controller-manager" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301501 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.301955 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8"] Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.302622 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.304501 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.304959 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.305230 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.305484 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.306232 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.306531 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.306786 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.307670 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.308239 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.308793 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.308903 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.314183 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.318116 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.324002 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h"] Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.324043 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8"] Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.398733 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-client-ca\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399007 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-proxy-ca-bundles\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399157 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-client-ca\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399279 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9sb\" (UniqueName: \"kubernetes.io/projected/e7b8a972-ec6d-4501-80ed-cdcaba552029-kube-api-access-9b9sb\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399378 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-config\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399497 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b8a972-ec6d-4501-80ed-cdcaba552029-serving-cert\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399600 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-config\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7bl\" (UniqueName: \"kubernetes.io/projected/41ae1a5a-420b-459d-bc28-071edd6dca3e-kube-api-access-4b7bl\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.399786 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/41ae1a5a-420b-459d-bc28-071edd6dca3e-serving-cert\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501263 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b8a972-ec6d-4501-80ed-cdcaba552029-serving-cert\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501312 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-config\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501349 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7bl\" (UniqueName: \"kubernetes.io/projected/41ae1a5a-420b-459d-bc28-071edd6dca3e-kube-api-access-4b7bl\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501384 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1a5a-420b-459d-bc28-071edd6dca3e-serving-cert\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501439 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-client-ca\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501472 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-proxy-ca-bundles\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501497 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-client-ca\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9sb\" (UniqueName: \"kubernetes.io/projected/e7b8a972-ec6d-4501-80ed-cdcaba552029-kube-api-access-9b9sb\") pod 
\"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.501560 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-config\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.502912 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-config\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.503020 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-proxy-ca-bundles\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.503506 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b8a972-ec6d-4501-80ed-cdcaba552029-client-ca\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.503771 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-config\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.504593 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1a5a-420b-459d-bc28-071edd6dca3e-client-ca\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.506583 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1a5a-420b-459d-bc28-071edd6dca3e-serving-cert\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.518340 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b8a972-ec6d-4501-80ed-cdcaba552029-serving-cert\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.525809 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4b7bl\" (UniqueName: \"kubernetes.io/projected/41ae1a5a-420b-459d-bc28-071edd6dca3e-kube-api-access-4b7bl\") pod \"route-controller-manager-798fbb5b47-4kwr8\" (UID: \"41ae1a5a-420b-459d-bc28-071edd6dca3e\") " pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.527793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9sb\" (UniqueName: \"kubernetes.io/projected/e7b8a972-ec6d-4501-80ed-cdcaba552029-kube-api-access-9b9sb\") pod \"controller-manager-6d986fd6d8-rbc4h\" (UID: \"e7b8a972-ec6d-4501-80ed-cdcaba552029\") " pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.620164 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.669797 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.845671 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h"] Jan 26 16:11:32 crc kubenswrapper[4680]: I0126 16:11:32.918459 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8"] Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.176139 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f829ac2-5694-475e-9eab-e8c8216bdefe" path="/var/lib/kubelet/pods/5f829ac2-5694-475e-9eab-e8c8216bdefe/volumes" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.177170 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84441839-5310-4707-84bc-9c64b5c78464" path="/var/lib/kubelet/pods/84441839-5310-4707-84bc-9c64b5c78464/volumes" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.817834 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" event={"ID":"e7b8a972-ec6d-4501-80ed-cdcaba552029","Type":"ContainerStarted","Data":"8a054495c4beeff2d8079a0f99a6821754e44773e1514cc89fac776e83440263"} Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.818136 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.818146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" event={"ID":"e7b8a972-ec6d-4501-80ed-cdcaba552029","Type":"ContainerStarted","Data":"e6dc0aa1b6bc9181648e6bd3aa1a562519822dadbfaa13b05a91bb87d13dd718"} Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.819473 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" event={"ID":"41ae1a5a-420b-459d-bc28-071edd6dca3e","Type":"ContainerStarted","Data":"b6de83640e6a09a4d1869d9013148b32830a6fc40b4dbbde3cadc80d3bf0918e"} Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.819582 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" 
event={"ID":"41ae1a5a-420b-459d-bc28-071edd6dca3e","Type":"ContainerStarted","Data":"1d41d38dded02c4e807524fda37dc790c1e77aca2af36c530b67fd6d124b55b8"} Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.819772 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.823862 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.825612 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.840495 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" podStartSLOduration=3.840477373 podStartE2EDuration="3.840477373s" podCreationTimestamp="2026-01-26 16:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:11:33.837802547 +0000 UTC m=+368.999074816" watchObservedRunningTime="2026-01-26 16:11:33.840477373 +0000 UTC m=+369.001749632" Jan 26 16:11:33 crc kubenswrapper[4680]: I0126 16:11:33.853874 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" podStartSLOduration=3.853856212 podStartE2EDuration="3.853856212s" podCreationTimestamp="2026-01-26 16:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:11:33.851413843 +0000 UTC m=+369.012686112" watchObservedRunningTime="2026-01-26 16:11:33.853856212 +0000 UTC m=+369.015128481" Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.875438 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsnkg"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.876999 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsnkg" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="registry-server" containerID="cri-o://78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0" gracePeriod=30 Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.884507 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hlx8"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.884830 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hlx8" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="registry-server" containerID="cri-o://00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9" gracePeriod=30 Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.900151 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.900355 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" 
containerID="cri-o://f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad" gracePeriod=30 Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.909621 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.910208 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76zsc" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="registry-server" containerID="cri-o://fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322" gracePeriod=30 Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.922507 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzvqm"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.923131 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.932466 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"] Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.932678 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s74p8" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="registry-server" containerID="cri-o://ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0" gracePeriod=30 Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.945161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.945215 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.945343 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spr5r\" (UniqueName: \"kubernetes.io/projected/5283315c-decc-4a61-aee5-74715a2f2393-kube-api-access-spr5r\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:35 crc kubenswrapper[4680]: I0126 16:11:35.949033 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzvqm"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.046771 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spr5r\" (UniqueName: \"kubernetes.io/projected/5283315c-decc-4a61-aee5-74715a2f2393-kube-api-access-spr5r\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 
16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.047138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.047167 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.048972 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.056524 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5283315c-decc-4a61-aee5-74715a2f2393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.067638 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spr5r\" (UniqueName: \"kubernetes.io/projected/5283315c-decc-4a61-aee5-74715a2f2393-kube-api-access-spr5r\") pod \"marketplace-operator-79b997595-xzvqm\" (UID: \"5283315c-decc-4a61-aee5-74715a2f2393\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.246390 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.346636 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.450858 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljp5\" (UniqueName: \"kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5\") pod \"692a260c-34fe-45b3-8ee0-1f438a630beb\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.451153 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content\") pod \"692a260c-34fe-45b3-8ee0-1f438a630beb\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.451178 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities\") pod \"692a260c-34fe-45b3-8ee0-1f438a630beb\" (UID: \"692a260c-34fe-45b3-8ee0-1f438a630beb\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.452345 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities" (OuterVolumeSpecName: "utilities") pod "692a260c-34fe-45b3-8ee0-1f438a630beb" (UID: "692a260c-34fe-45b3-8ee0-1f438a630beb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.459292 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5" (OuterVolumeSpecName: "kube-api-access-8ljp5") pod "692a260c-34fe-45b3-8ee0-1f438a630beb" (UID: "692a260c-34fe-45b3-8ee0-1f438a630beb"). InnerVolumeSpecName "kube-api-access-8ljp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.519369 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.519975 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.540462 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "692a260c-34fe-45b3-8ee0-1f438a630beb" (UID: "692a260c-34fe-45b3-8ee0-1f438a630beb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.551771 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552535 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content\") pod \"0541f242-a3cd-490a-9e63-3f1278f05dc6\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552577 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pkz4\" (UniqueName: \"kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4\") pod \"a25355b2-4808-4605-a4a7-b51d677ad232\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552639 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities\") pod \"a25355b2-4808-4605-a4a7-b51d677ad232\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552666 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content\") pod \"a25355b2-4808-4605-a4a7-b51d677ad232\" (UID: \"a25355b2-4808-4605-a4a7-b51d677ad232\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552692 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbbv\" (UniqueName: \"kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv\") pod \"0541f242-a3cd-490a-9e63-3f1278f05dc6\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552715 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities\") pod \"0541f242-a3cd-490a-9e63-3f1278f05dc6\" (UID: \"0541f242-a3cd-490a-9e63-3f1278f05dc6\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552899 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ljp5\" (UniqueName: \"kubernetes.io/projected/692a260c-34fe-45b3-8ee0-1f438a630beb-kube-api-access-8ljp5\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552916 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.552926 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692a260c-34fe-45b3-8ee0-1f438a630beb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.553534 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities" (OuterVolumeSpecName: "utilities") pod "0541f242-a3cd-490a-9e63-3f1278f05dc6" (UID: "0541f242-a3cd-490a-9e63-3f1278f05dc6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.556760 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities" (OuterVolumeSpecName: "utilities") pod "a25355b2-4808-4605-a4a7-b51d677ad232" (UID: "a25355b2-4808-4605-a4a7-b51d677ad232"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.558838 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv" (OuterVolumeSpecName: "kube-api-access-4rbbv") pod "0541f242-a3cd-490a-9e63-3f1278f05dc6" (UID: "0541f242-a3cd-490a-9e63-3f1278f05dc6"). InnerVolumeSpecName "kube-api-access-4rbbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.559134 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4" (OuterVolumeSpecName: "kube-api-access-4pkz4") pod "a25355b2-4808-4605-a4a7-b51d677ad232" (UID: "a25355b2-4808-4605-a4a7-b51d677ad232"). InnerVolumeSpecName "kube-api-access-4pkz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.580607 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653581 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-trusted-ca\") pod \"9b43e189-43b7-4c00-a149-fee8236f2e22\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsggw\" (UniqueName: \"kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw\") pod \"9b43e189-43b7-4c00-a149-fee8236f2e22\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653662 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities\") pod \"4518f8bc-7ce9-40ee-8b35-263609e549aa\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653690 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content\") pod \"4518f8bc-7ce9-40ee-8b35-263609e549aa\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653840 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics\") pod \"9b43e189-43b7-4c00-a149-fee8236f2e22\" (UID: \"9b43e189-43b7-4c00-a149-fee8236f2e22\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.653873 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fj7gh\" (UniqueName: \"kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh\") pod \"4518f8bc-7ce9-40ee-8b35-263609e549aa\" (UID: \"4518f8bc-7ce9-40ee-8b35-263609e549aa\") " Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.654087 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.654099 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbbv\" (UniqueName: \"kubernetes.io/projected/0541f242-a3cd-490a-9e63-3f1278f05dc6-kube-api-access-4rbbv\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.654109 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.654118 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pkz4\" (UniqueName: \"kubernetes.io/projected/a25355b2-4808-4605-a4a7-b51d677ad232-kube-api-access-4pkz4\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.655812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9b43e189-43b7-4c00-a149-fee8236f2e22" (UID: "9b43e189-43b7-4c00-a149-fee8236f2e22"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.656160 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities" (OuterVolumeSpecName: "utilities") pod "4518f8bc-7ce9-40ee-8b35-263609e549aa" (UID: "4518f8bc-7ce9-40ee-8b35-263609e549aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.657710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25355b2-4808-4605-a4a7-b51d677ad232" (UID: "a25355b2-4808-4605-a4a7-b51d677ad232"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.659613 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw" (OuterVolumeSpecName: "kube-api-access-vsggw") pod "9b43e189-43b7-4c00-a149-fee8236f2e22" (UID: "9b43e189-43b7-4c00-a149-fee8236f2e22"). InnerVolumeSpecName "kube-api-access-vsggw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.659802 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9b43e189-43b7-4c00-a149-fee8236f2e22" (UID: "9b43e189-43b7-4c00-a149-fee8236f2e22"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.664426 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh" (OuterVolumeSpecName: "kube-api-access-fj7gh") pod "4518f8bc-7ce9-40ee-8b35-263609e549aa" (UID: "4518f8bc-7ce9-40ee-8b35-263609e549aa"). InnerVolumeSpecName "kube-api-access-fj7gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.713384 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0541f242-a3cd-490a-9e63-3f1278f05dc6" (UID: "0541f242-a3cd-490a-9e63-3f1278f05dc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.717833 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4518f8bc-7ce9-40ee-8b35-263609e549aa" (UID: "4518f8bc-7ce9-40ee-8b35-263609e549aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754783 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754831 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0541f242-a3cd-490a-9e63-3f1278f05dc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754841 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj7gh\" (UniqueName: \"kubernetes.io/projected/4518f8bc-7ce9-40ee-8b35-263609e549aa-kube-api-access-fj7gh\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754852 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b43e189-43b7-4c00-a149-fee8236f2e22-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754860 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsggw\" (UniqueName: \"kubernetes.io/projected/9b43e189-43b7-4c00-a149-fee8236f2e22-kube-api-access-vsggw\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754891 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754901 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4518f8bc-7ce9-40ee-8b35-263609e549aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.754910 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a25355b2-4808-4605-a4a7-b51d677ad232-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.767863 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzvqm"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.850880 4680 generic.go:334] "Generic (PLEG): container finished" podID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerID="78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0" exitCode=0 Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.850966 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerDied","Data":"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.850995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnkg" event={"ID":"692a260c-34fe-45b3-8ee0-1f438a630beb","Type":"ContainerDied","Data":"24e403ffbd69a0f2f9b331b806380daf42afa879f6cdc01ceb28c3b31703e26c"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.851012 4680 scope.go:117] "RemoveContainer" containerID="78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.851150 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsnkg" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.861847 4680 generic.go:334] "Generic (PLEG): container finished" podID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerID="00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9" exitCode=0 Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.861964 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hlx8" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.861980 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerDied","Data":"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.864121 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hlx8" event={"ID":"4518f8bc-7ce9-40ee-8b35-263609e549aa","Type":"ContainerDied","Data":"bc41e70dd3828a44cf65b84d1031f21030d909d654112e693d871043f2cf6b0a"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.866955 4680 generic.go:334] "Generic (PLEG): container finished" podID="a25355b2-4808-4605-a4a7-b51d677ad232" containerID="fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322" exitCode=0 Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.867084 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerDied","Data":"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.867119 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zsc" event={"ID":"a25355b2-4808-4605-a4a7-b51d677ad232","Type":"ContainerDied","Data":"6487b9845a58b322e761f9059a7f36265773e1c45483194d09db96c4371f091d"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.867268 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zsc" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.871338 4680 generic.go:334] "Generic (PLEG): container finished" podID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerID="f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad" exitCode=0 Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.871463 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerDied","Data":"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.871495 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" event={"ID":"9b43e189-43b7-4c00-a149-fee8236f2e22","Type":"ContainerDied","Data":"4c728fcdfe808e1a57007e39ad2b542007a74f571ed7ff8fbcd0fd1fe83c3fbd"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.871464 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vvkm4" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.875021 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" event={"ID":"5283315c-decc-4a61-aee5-74715a2f2393","Type":"ContainerStarted","Data":"7b26b4c04f4cdbf86cc776fe3063a290ba429cb81349f2ca5bfc8657ba724d6f"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.875178 4680 scope.go:117] "RemoveContainer" containerID="c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.877813 4680 generic.go:334] "Generic (PLEG): container finished" podID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerID="ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0" exitCode=0 Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.878010 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerDied","Data":"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.878977 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s74p8" event={"ID":"0541f242-a3cd-490a-9e63-3f1278f05dc6","Type":"ContainerDied","Data":"a4b46eb7693c2f559f48680f33d6efe6f965e0b2dc39e1287ab8be5805bd7c93"} Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.878148 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s74p8" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.908020 4680 scope.go:117] "RemoveContainer" containerID="2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.909026 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hlx8"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.914879 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hlx8"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.947962 4680 scope.go:117] "RemoveContainer" containerID="78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.948172 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.948458 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0\": container with ID starting with 78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0 not found: ID does not exist" containerID="78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.948502 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0"} err="failed to get container status \"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0\": rpc error: code = NotFound desc = could not find container \"78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0\": container with ID starting with 
78b989168d9a7aa0d8f6bf3d7385b1155b3a523f1f3708a27a4f61b828de3ec0 not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.948529 4680 scope.go:117] "RemoveContainer" containerID="c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6" Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.948835 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6\": container with ID starting with c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6 not found: ID does not exist" containerID="c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.948861 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6"} err="failed to get container status \"c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6\": rpc error: code = NotFound desc = could not find container \"c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6\": container with ID starting with c7ab5d53e3da1a6bb012755b9d45313daa543f3a44406fc62108864a9c2723f6 not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.948880 4680 scope.go:117] "RemoveContainer" containerID="2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185" Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.949081 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185\": container with ID starting with 2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185 not found: ID does not exist" containerID="2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.949101 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185"} err="failed to get container status \"2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185\": rpc error: code = NotFound desc = could not find container \"2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185\": container with ID starting with 2ce1bcb853cbea36f0052170a871151213cdde75a9479167350ffdd12da6c185 not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.949114 4680 scope.go:117] "RemoveContainer" containerID="00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.955811 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vvkm4"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.960310 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.966127 4680 scope.go:117] "RemoveContainer" containerID="c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.973240 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s74p8"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.980121 4680 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-fsnkg"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.980255 4680 scope.go:117] "RemoveContainer" containerID="543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.983559 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsnkg"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.986613 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.991205 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zsc"] Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.991526 4680 scope.go:117] "RemoveContainer" containerID="00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9" Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.991840 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9\": container with ID starting with 00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9 not found: ID does not exist" containerID="00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.991927 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9"} err="failed to get container status \"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9\": rpc error: code = NotFound desc = could not find container \"00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9\": container with ID starting with 00bc1ef193b19a6a7acd82d97fd95fb39e54fad76422e0e8949ac2c48d01a9c9 not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.992048 4680 scope.go:117] "RemoveContainer" containerID="c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3" Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.992258 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3\": container with ID starting with c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3 not found: ID does not exist" containerID="c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.992279 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3"} err="failed to get container status \"c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3\": rpc error: code = NotFound desc = could not find container \"c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3\": container with ID starting with c7ece4624835d3d7d215eb16ba6d41c4546a906d5bf71f1a2148877555a627f3 not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.992290 4680 scope.go:117] "RemoveContainer" containerID="543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce" Jan 26 16:11:36 crc kubenswrapper[4680]: E0126 16:11:36.992918 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce\": container with ID starting with 543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce not found: ID does not exist" containerID="543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.992938 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce"} err="failed to get container status \"543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce\": rpc error: code = NotFound desc = could not find container \"543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce\": container with ID starting with 543498bf226d7746c8dde8ba64706914d4739daef25ed55d912193a04614b5ce not found: ID does not exist" Jan 26 16:11:36 crc kubenswrapper[4680]: I0126 16:11:36.992952 4680 scope.go:117] "RemoveContainer" containerID="fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.037121 4680 scope.go:117] "RemoveContainer" containerID="8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.053885 4680 scope.go:117] "RemoveContainer" containerID="0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.077016 4680 scope.go:117] "RemoveContainer" containerID="fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.077679 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322\": container with ID starting with fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322 not found: ID does not exist" containerID="fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.077706 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322"} err="failed to get container status \"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322\": rpc error: code = NotFound desc = could not find container \"fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322\": container with ID starting with fc617be6d4fa768b950017cbe7422937cb1f4a7523e77840e8bd4cf96e6a1322 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.077729 4680 scope.go:117] "RemoveContainer" containerID="8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.078006 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696\": container with ID starting with 8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696 not found: ID does not exist" containerID="8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.078048 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696"} err="failed to get container status \"8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696\": rpc error: code = NotFound desc = could not find container \"8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696\": container with ID starting with 8ade9602de64712e992172ee1c9d012c8e16573e393cdd1e2e53fbab3f54d696 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.078130 4680 scope.go:117] "RemoveContainer" containerID="0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.078451 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707\": container with ID starting with 0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707 not found: ID does not exist" containerID="0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.078475 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707"} err="failed to get container status \"0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707\": rpc error: code = NotFound desc = could not find container \"0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707\": container with ID starting with 0a196dc92599b517ada8818a12fb8d3a01f3f239af6434324cca407d9f3da707 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.078491 4680 scope.go:117] "RemoveContainer" containerID="f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.096936 4680 scope.go:117] "RemoveContainer" containerID="512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.109196 4680 scope.go:117] "RemoveContainer" containerID="f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.109469 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad\": container with ID starting with f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad not found: ID does not exist" containerID="f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.109501 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad"} err="failed to get container status \"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad\": rpc error: code = NotFound desc = could not find container \"f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad\": container with ID starting with f5738a798ac53d9faed495b9df48a6ece853b7a4c5f589bfd55c47eeea775fad not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.109523 4680 scope.go:117] "RemoveContainer" containerID="512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.109723 4680 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198\": container with ID starting with 512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198 not found: ID does not exist" containerID="512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.109754 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198"} err="failed to get container status \"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198\": rpc error: code = NotFound desc = could not find container \"512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198\": container with ID starting with 512daee33b2c179ed93b21e0185cb881dbfbf26aa9d174ccfaec7351c937d198 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.109766 4680 scope.go:117] "RemoveContainer" containerID="ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.126925 4680 scope.go:117] "RemoveContainer" containerID="3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.146935 4680 scope.go:117] "RemoveContainer" containerID="283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.166033 4680 scope.go:117] "RemoveContainer" containerID="ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.166607 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0\": container with ID starting with ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0 not found: ID does not exist" containerID="ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.166653 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0"} err="failed to get container status \"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0\": rpc error: code = NotFound desc = could not find container \"ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0\": container with ID starting with ead9283acf3efc879e22fd089e92d44b427aae281daa04cfb9dbb06fdd9fe7c0 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.166694 4680 scope.go:117] "RemoveContainer" containerID="3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.167257 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718\": container with ID starting with 3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718 not found: ID does not exist" containerID="3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.167374 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718"} err="failed to get container status \"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718\": rpc error: code = NotFound desc = could not find container \"3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718\": container with ID starting with 3cfeed8ab49daeaeef3f25723f8ee2355efd8857892631768574f528d9acb718 not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.167476 4680 scope.go:117] "RemoveContainer" containerID="283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.167826 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f\": container with ID starting with 283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f not found: ID does not exist" containerID="283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.167862 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f"} err="failed to get container status \"283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f\": rpc error: code = NotFound desc = could not find container \"283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f\": container with ID starting with 283915d0dbfb257dcf802c2089c84f7c912c1bc0abeac771357fb5f864455f5f not found: ID does not exist" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.176426 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" path="/var/lib/kubelet/pods/0541f242-a3cd-490a-9e63-3f1278f05dc6/volumes" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.177043 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" path="/var/lib/kubelet/pods/4518f8bc-7ce9-40ee-8b35-263609e549aa/volumes" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.177603 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" path="/var/lib/kubelet/pods/692a260c-34fe-45b3-8ee0-1f438a630beb/volumes" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.178988 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" path="/var/lib/kubelet/pods/9b43e189-43b7-4c00-a149-fee8236f2e22/volumes" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.180208 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" path="/var/lib/kubelet/pods/a25355b2-4808-4605-a4a7-b51d677ad232/volumes" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488478 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488657 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488669 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="extract-utilities" Jan 26 16:11:37 crc 
kubenswrapper[4680]: E0126 16:11:37.488681 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488687 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488695 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488701 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488710 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488716 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488727 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488733 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488742 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488748 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488757 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488762 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488772 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488778 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488788 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488793 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488802 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488807 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" 
containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488816 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488822 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="extract-utilities" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488830 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488835 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="extract-content" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.488841 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488847 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488924 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0541f242-a3cd-490a-9e63-3f1278f05dc6" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488932 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488950 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4518f8bc-7ce9-40ee-8b35-263609e549aa" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488956 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488967 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="692a260c-34fe-45b3-8ee0-1f438a630beb" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.488974 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25355b2-4808-4605-a4a7-b51d677ad232" containerName="registry-server" Jan 26 16:11:37 crc kubenswrapper[4680]: E0126 16:11:37.489063 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.489105 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b43e189-43b7-4c00-a149-fee8236f2e22" containerName="marketplace-operator" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.489703 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.491674 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.502783 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.563554 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.563664 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58d6\" (UniqueName: \"kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.563685 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.665395 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.665452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58d6\" (UniqueName: \"kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.665503 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.666115 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.666129 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content\") pod \"redhat-marketplace-fxxq9\" (UID: 
\"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.684890 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58d6\" (UniqueName: \"kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6\") pod \"redhat-marketplace-fxxq9\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.804717 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.890561 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" event={"ID":"5283315c-decc-4a61-aee5-74715a2f2393","Type":"ContainerStarted","Data":"07b4c123cf105eef391e17be1b89caa4ee50e36016b7a40007d89b789c27465b"} Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.892379 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.924549 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 16:11:37 crc kubenswrapper[4680]: I0126 16:11:37.943982 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podStartSLOduration=2.943968802 podStartE2EDuration="2.943968802s" podCreationTimestamp="2026-01-26 16:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:11:37.941490502 +0000 UTC m=+373.102762771" watchObservedRunningTime="2026-01-26 16:11:37.943968802 +0000 UTC m=+373.105241071" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.253458 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 16:11:38 crc kubenswrapper[4680]: W0126 16:11:38.259900 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7906ec80_b796_4c46_9867_cf61576a73b7.slice/crio-4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216 WatchSource:0}: Error finding container 4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216: Status 404 returned error can't find the container with id 4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216 Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.494611 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qzzv"] Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.495888 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.501325 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.509782 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qzzv"] Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.575106 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-utilities\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.575149 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qnz9\" (UniqueName: \"kubernetes.io/projected/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-kube-api-access-8qnz9\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.575186 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-catalog-content\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.676151 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-utilities\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.676201 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qnz9\" (UniqueName: \"kubernetes.io/projected/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-kube-api-access-8qnz9\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.676255 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-catalog-content\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.676688 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-utilities\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.676704 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-catalog-content\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " 
pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.695574 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qnz9\" (UniqueName: \"kubernetes.io/projected/25be1e15-1f83-4ee7-82d4-9ffd6ff46f82-kube-api-access-8qnz9\") pod \"redhat-operators-5qzzv\" (UID: \"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82\") " pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.823764 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.904632 4680 generic.go:334] "Generic (PLEG): container finished" podID="7906ec80-b796-4c46-9867-cf61576a73b7" containerID="ddb6f77b2fe11d27e440c4663fd6684674643c5328563c970fa27b8eeadbf0c4" exitCode=0 Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.905667 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerDied","Data":"ddb6f77b2fe11d27e440c4663fd6684674643c5328563c970fa27b8eeadbf0c4"} Jan 26 16:11:38 crc kubenswrapper[4680]: I0126 16:11:38.905690 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerStarted","Data":"4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216"} Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.257196 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qzzv"] Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.890604 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-42hb6"] Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.891822 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.894020 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.898826 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42hb6"] Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.913133 4680 generic.go:334] "Generic (PLEG): container finished" podID="7906ec80-b796-4c46-9867-cf61576a73b7" containerID="6279456bdce49f8c09f230c770944ff2c95f77bc7700dc830e1e9ea32b0e6c6d" exitCode=0 Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.913222 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerDied","Data":"6279456bdce49f8c09f230c770944ff2c95f77bc7700dc830e1e9ea32b0e6c6d"} Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.923285 4680 generic.go:334] "Generic (PLEG): container finished" podID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerID="9641a227305f2cab580cbed0996119c3ea3c6ab1a985c9e7d6a93161d5e156fd" exitCode=0 Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.923425 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzzv" event={"ID":"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82","Type":"ContainerDied","Data":"9641a227305f2cab580cbed0996119c3ea3c6ab1a985c9e7d6a93161d5e156fd"} Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.924024 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzzv" event={"ID":"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82","Type":"ContainerStarted","Data":"69c9f7968194fbaa7bc45e458f91ad8c667efe48b56e06a0c562ac8ae2cf4420"} Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.996107 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-utilities\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.996209 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-catalog-content\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:39 crc kubenswrapper[4680]: I0126 16:11:39.996246 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2z8\" (UniqueName: \"kubernetes.io/projected/132d1f81-6236-4d81-a220-72d5bb914144-kube-api-access-cf2z8\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.097443 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-utilities\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 
16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.097531 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-catalog-content\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.097568 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf2z8\" (UniqueName: \"kubernetes.io/projected/132d1f81-6236-4d81-a220-72d5bb914144-kube-api-access-cf2z8\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.098028 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-utilities\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.098128 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1f81-6236-4d81-a220-72d5bb914144-catalog-content\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.119822 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf2z8\" (UniqueName: \"kubernetes.io/projected/132d1f81-6236-4d81-a220-72d5bb914144-kube-api-access-cf2z8\") pod \"community-operators-42hb6\" (UID: \"132d1f81-6236-4d81-a220-72d5bb914144\") " pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.218202 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.632266 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42hb6"] Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.892536 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.893673 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.895926 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.904524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c6v\" (UniqueName: \"kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.904586 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.904614 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.912399 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.938747 4680 generic.go:334] "Generic (PLEG): container finished" podID="132d1f81-6236-4d81-a220-72d5bb914144" containerID="01cd1a7e89cca7eaab3cf82e2d0ef413932031b9038b3f7a92e05df638165c9d" exitCode=0 Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.938780 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42hb6" event={"ID":"132d1f81-6236-4d81-a220-72d5bb914144","Type":"ContainerDied","Data":"01cd1a7e89cca7eaab3cf82e2d0ef413932031b9038b3f7a92e05df638165c9d"} Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.938812 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42hb6" event={"ID":"132d1f81-6236-4d81-a220-72d5bb914144","Type":"ContainerStarted","Data":"e4ef40f34b6559f415f785ca53738dc17e87f15aec23810dd421206f19b71aef"} Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.944241 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerStarted","Data":"47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9"} Jan 26 16:11:40 crc kubenswrapper[4680]: I0126 16:11:40.976819 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fxxq9" podStartSLOduration=2.332935358 podStartE2EDuration="3.976805193s" podCreationTimestamp="2026-01-26 16:11:37 +0000 UTC" firstStartedPulling="2026-01-26 16:11:38.909344949 +0000 UTC m=+374.070617218" lastFinishedPulling="2026-01-26 16:11:40.553214784 +0000 UTC m=+375.714487053" observedRunningTime="2026-01-26 16:11:40.976387331 +0000 UTC m=+376.137659600" watchObservedRunningTime="2026-01-26 16:11:40.976805193 +0000 UTC m=+376.138077462" Jan 26 
16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.005693 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.005749 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.005801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64c6v\" (UniqueName: \"kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.006287 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.006292 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.025671 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64c6v\" (UniqueName: \"kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v\") pod \"certified-operators-cs6qq\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.208260 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.642124 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.950063 4680 generic.go:334] "Generic (PLEG): container finished" podID="132d1f81-6236-4d81-a220-72d5bb914144" containerID="864a1acb986af4690e32f385521d570db34f2730259669d0feaedb46f43edffa" exitCode=0 Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.950104 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42hb6" event={"ID":"132d1f81-6236-4d81-a220-72d5bb914144","Type":"ContainerDied","Data":"864a1acb986af4690e32f385521d570db34f2730259669d0feaedb46f43edffa"} Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.952494 4680 generic.go:334] "Generic (PLEG): container finished" podID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerID="d031b134d6340d893e32d55daf3c7be8c0cc88b86d619e5139957b06127c2343" exitCode=0 Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.952548 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzzv" event={"ID":"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82","Type":"ContainerDied","Data":"d031b134d6340d893e32d55daf3c7be8c0cc88b86d619e5139957b06127c2343"} Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.953992 4680 generic.go:334] "Generic (PLEG): container finished" podID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerID="fb44feba061ab8e0da977f89b504e2aa2044042386b8b6a553ae877a22b4f774" exitCode=0 Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.955388 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerDied","Data":"fb44feba061ab8e0da977f89b504e2aa2044042386b8b6a553ae877a22b4f774"} Jan 26 16:11:41 crc kubenswrapper[4680]: I0126 16:11:41.955409 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerStarted","Data":"9e468e0965cacb9c2b8916d1ab2ac225a92c82bbb1083a69fa3b8d899cd07dec"} Jan 26 16:11:42 crc kubenswrapper[4680]: I0126 16:11:42.961281 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzzv" event={"ID":"25be1e15-1f83-4ee7-82d4-9ffd6ff46f82","Type":"ContainerStarted","Data":"36896dd4d4e22bae24cb97c0b4833ab4412b8a1727420929c328c98c68a91f0c"} Jan 26 16:11:42 crc kubenswrapper[4680]: I0126 16:11:42.964328 4680 generic.go:334] "Generic (PLEG): container finished" podID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerID="b7ae92a0d5339df2eb2eaa004492b583b4d7018cee07ad2cceb34cd3fa54fca4" exitCode=0 Jan 26 16:11:42 crc kubenswrapper[4680]: I0126 16:11:42.964393 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerDied","Data":"b7ae92a0d5339df2eb2eaa004492b583b4d7018cee07ad2cceb34cd3fa54fca4"} Jan 26 16:11:42 crc kubenswrapper[4680]: I0126 16:11:42.971381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42hb6" event={"ID":"132d1f81-6236-4d81-a220-72d5bb914144","Type":"ContainerStarted","Data":"a4ddc02f64e7c7f97b357c08215547b73bb145ba8a7a08a8639104859856465b"} Jan 26 16:11:42 crc 
kubenswrapper[4680]: I0126 16:11:42.981135 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qzzv" podStartSLOduration=2.5533802679999997 podStartE2EDuration="4.981120959s" podCreationTimestamp="2026-01-26 16:11:38 +0000 UTC" firstStartedPulling="2026-01-26 16:11:39.924536225 +0000 UTC m=+375.085808494" lastFinishedPulling="2026-01-26 16:11:42.352276916 +0000 UTC m=+377.513549185" observedRunningTime="2026-01-26 16:11:42.979624017 +0000 UTC m=+378.140896286" watchObservedRunningTime="2026-01-26 16:11:42.981120959 +0000 UTC m=+378.142393228" Jan 26 16:11:42 crc kubenswrapper[4680]: I0126 16:11:42.998009 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-42hb6" podStartSLOduration=2.62062296 podStartE2EDuration="3.997985057s" podCreationTimestamp="2026-01-26 16:11:39 +0000 UTC" firstStartedPulling="2026-01-26 16:11:40.940210596 +0000 UTC m=+376.101482865" lastFinishedPulling="2026-01-26 16:11:42.317572693 +0000 UTC m=+377.478844962" observedRunningTime="2026-01-26 16:11:42.994974662 +0000 UTC m=+378.156246931" watchObservedRunningTime="2026-01-26 16:11:42.997985057 +0000 UTC m=+378.159257326" Jan 26 16:11:43 crc kubenswrapper[4680]: I0126 16:11:43.978256 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerStarted","Data":"af343a459dd6142a2fdf19814a189026ab0adc17b15060f13c3f6ec659770ed4"} Jan 26 16:11:46 crc kubenswrapper[4680]: I0126 16:11:46.981119 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:11:46 crc kubenswrapper[4680]: I0126 16:11:46.981595 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:11:47 crc kubenswrapper[4680]: I0126 16:11:47.804923 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:47 crc kubenswrapper[4680]: I0126 16:11:47.805008 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:48 crc kubenswrapper[4680]: I0126 16:11:48.052366 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:48 crc kubenswrapper[4680]: I0126 16:11:48.073036 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cs6qq" podStartSLOduration=6.467224559 podStartE2EDuration="8.073018315s" podCreationTimestamp="2026-01-26 16:11:40 +0000 UTC" firstStartedPulling="2026-01-26 16:11:41.957587096 +0000 UTC m=+377.118859365" lastFinishedPulling="2026-01-26 16:11:43.563380852 +0000 UTC m=+378.724653121" observedRunningTime="2026-01-26 16:11:43.993724303 +0000 UTC m=+379.154996562" watchObservedRunningTime="2026-01-26 16:11:48.073018315 +0000 UTC m=+383.234290584" Jan 26 16:11:48 crc kubenswrapper[4680]: 
I0126 16:11:48.090869 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 16:11:48 crc kubenswrapper[4680]: I0126 16:11:48.824857 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:48 crc kubenswrapper[4680]: I0126 16:11:48.825342 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:48 crc kubenswrapper[4680]: I0126 16:11:48.885139 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:49 crc kubenswrapper[4680]: I0126 16:11:49.038958 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qzzv" Jan 26 16:11:50 crc kubenswrapper[4680]: I0126 16:11:50.219013 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:50 crc kubenswrapper[4680]: I0126 16:11:50.219117 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:50 crc kubenswrapper[4680]: I0126 16:11:50.274960 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:51 crc kubenswrapper[4680]: I0126 16:11:51.045478 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-42hb6" Jan 26 16:11:51 crc kubenswrapper[4680]: I0126 16:11:51.209403 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:51 crc kubenswrapper[4680]: I0126 16:11:51.209446 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:51 crc kubenswrapper[4680]: I0126 16:11:51.249299 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:11:52 crc kubenswrapper[4680]: I0126 16:11:52.058409 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 16:12:16 crc kubenswrapper[4680]: I0126 16:12:16.980922 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:12:16 crc kubenswrapper[4680]: I0126 16:12:16.981422 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:12:16 crc kubenswrapper[4680]: I0126 16:12:16.981469 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:12:16 crc kubenswrapper[4680]: I0126 16:12:16.982033 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:12:16 crc kubenswrapper[4680]: I0126 16:12:16.982118 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9" gracePeriod=600 Jan 26 16:12:17 crc kubenswrapper[4680]: I0126 16:12:17.168382 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9" exitCode=0 Jan 26 16:12:17 crc kubenswrapper[4680]: I0126 16:12:17.168445 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9"} Jan 26 16:12:17 crc kubenswrapper[4680]: I0126 16:12:17.168501 4680 scope.go:117] "RemoveContainer" containerID="3baf0c83b85722f97e3fc3725e61a68ce12d0e3053aed00240bdc2f6394cba47" Jan 26 16:12:18 crc kubenswrapper[4680]: I0126 16:12:18.176037 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec"} Jan 26 16:14:46 crc kubenswrapper[4680]: I0126 16:14:46.981008 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:14:46 crc kubenswrapper[4680]: I0126 16:14:46.981708 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.191843 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv"] Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.193324 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.195591 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.195753 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.206844 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv"] Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.304986 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.305237 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt44\" (UniqueName: \"kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.305314 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.406441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.406575 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.406602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt44\" (UniqueName: \"kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.407872 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume\") pod 
\"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.412087 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.424706 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt44\" (UniqueName: \"kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44\") pod \"collect-profiles-29490735-6wwcv\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.509737 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:00 crc kubenswrapper[4680]: I0126 16:15:00.738463 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv"] Jan 26 16:15:01 crc kubenswrapper[4680]: I0126 16:15:01.096437 4680 generic.go:334] "Generic (PLEG): container finished" podID="4bfdcd66-b3be-4e6f-9700-91414b2926be" containerID="2555271667f61abf5c36e4fe8ca6d8c1531e5ed19463e47954f6e1006eba917e" exitCode=0 Jan 26 16:15:01 crc kubenswrapper[4680]: I0126 16:15:01.096476 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" event={"ID":"4bfdcd66-b3be-4e6f-9700-91414b2926be","Type":"ContainerDied","Data":"2555271667f61abf5c36e4fe8ca6d8c1531e5ed19463e47954f6e1006eba917e"} Jan 26 16:15:01 crc kubenswrapper[4680]: I0126 16:15:01.096499 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" event={"ID":"4bfdcd66-b3be-4e6f-9700-91414b2926be","Type":"ContainerStarted","Data":"b79a8197f78982256c47b732a03aced0d2348148432935aced56dcabbcc5ea5b"} Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.319873 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.427639 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume\") pod \"4bfdcd66-b3be-4e6f-9700-91414b2926be\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.427706 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lt44\" (UniqueName: \"kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44\") pod \"4bfdcd66-b3be-4e6f-9700-91414b2926be\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.427733 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume\") pod \"4bfdcd66-b3be-4e6f-9700-91414b2926be\" (UID: \"4bfdcd66-b3be-4e6f-9700-91414b2926be\") " Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.428852 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bfdcd66-b3be-4e6f-9700-91414b2926be" (UID: "4bfdcd66-b3be-4e6f-9700-91414b2926be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.433214 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bfdcd66-b3be-4e6f-9700-91414b2926be" (UID: "4bfdcd66-b3be-4e6f-9700-91414b2926be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.439318 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44" (OuterVolumeSpecName: "kube-api-access-9lt44") pod "4bfdcd66-b3be-4e6f-9700-91414b2926be" (UID: "4bfdcd66-b3be-4e6f-9700-91414b2926be"). InnerVolumeSpecName "kube-api-access-9lt44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.529941 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdcd66-b3be-4e6f-9700-91414b2926be-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.530012 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lt44\" (UniqueName: \"kubernetes.io/projected/4bfdcd66-b3be-4e6f-9700-91414b2926be-kube-api-access-9lt44\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:02 crc kubenswrapper[4680]: I0126 16:15:02.530041 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfdcd66-b3be-4e6f-9700-91414b2926be-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4680]: I0126 16:15:03.113160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" event={"ID":"4bfdcd66-b3be-4e6f-9700-91414b2926be","Type":"ContainerDied","Data":"b79a8197f78982256c47b732a03aced0d2348148432935aced56dcabbcc5ea5b"} Jan 26 16:15:03 crc kubenswrapper[4680]: I0126 16:15:03.113215 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79a8197f78982256c47b732a03aced0d2348148432935aced56dcabbcc5ea5b" Jan 26 16:15:03 crc kubenswrapper[4680]: I0126 16:15:03.113300 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv" Jan 26 16:15:16 crc kubenswrapper[4680]: I0126 16:15:16.981189 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:15:16 crc kubenswrapper[4680]: I0126 16:15:16.981773 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:15:46 crc kubenswrapper[4680]: I0126 16:15:46.981324 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:15:46 crc kubenswrapper[4680]: I0126 16:15:46.981853 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:15:46 crc kubenswrapper[4680]: I0126 16:15:46.981927 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:15:46 crc kubenswrapper[4680]: I0126 16:15:46.982749 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:15:46 crc kubenswrapper[4680]: I0126 16:15:46.982839 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec" gracePeriod=600 Jan 26 16:15:47 crc kubenswrapper[4680]: I0126 16:15:47.389526 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec" exitCode=0 Jan 26 16:15:47 crc kubenswrapper[4680]: I0126 16:15:47.389739 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec"} Jan 26 16:15:47 crc kubenswrapper[4680]: I0126 16:15:47.389870 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94"} Jan 26 16:15:47 crc kubenswrapper[4680]: I0126 16:15:47.389895 4680 scope.go:117] "RemoveContainer" containerID="4d7aa8956dd19c2869b6fc368d57ebe2297b26fb6365b63a13635d09cdc7a2f9" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.163046 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j"] Jan 26 16:16:53 crc kubenswrapper[4680]: E0126 16:16:53.164202 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfdcd66-b3be-4e6f-9700-91414b2926be" containerName="collect-profiles" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.164217 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfdcd66-b3be-4e6f-9700-91414b2926be" containerName="collect-profiles" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.164312 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfdcd66-b3be-4e6f-9700-91414b2926be" containerName="collect-profiles" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.164719 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.169950 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.170680 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-crj8q" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.178473 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.180086 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.181925 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv77l\" (UniqueName: \"kubernetes.io/projected/7ada9d5c-4b8d-41fe-a413-577bd303d42b-kube-api-access-qv77l\") pod \"cert-manager-cainjector-cf98fcc89-xpw6j\" (UID: \"7ada9d5c-4b8d-41fe-a413-577bd303d42b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.184762 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-z75cc"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.185427 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z75cc" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.190679 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vwwbn" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.198337 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z75cc"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.203457 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tlm4s"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.204114 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.208567 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tvj94" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.222119 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tlm4s"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.283018 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv77l\" (UniqueName: \"kubernetes.io/projected/7ada9d5c-4b8d-41fe-a413-577bd303d42b-kube-api-access-qv77l\") pod \"cert-manager-cainjector-cf98fcc89-xpw6j\" (UID: \"7ada9d5c-4b8d-41fe-a413-577bd303d42b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.301321 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv77l\" (UniqueName: \"kubernetes.io/projected/7ada9d5c-4b8d-41fe-a413-577bd303d42b-kube-api-access-qv77l\") pod \"cert-manager-cainjector-cf98fcc89-xpw6j\" (UID: \"7ada9d5c-4b8d-41fe-a413-577bd303d42b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.383970 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwmh\" (UniqueName: \"kubernetes.io/projected/c0e0de34-8f98-4db6-abf2-856f6477119e-kube-api-access-vpwmh\") pod \"cert-manager-webhook-687f57d79b-tlm4s\" (UID: \"c0e0de34-8f98-4db6-abf2-856f6477119e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.384502 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4w6\" (UniqueName: \"kubernetes.io/projected/509af944-2b66-4364-80f2-0793e8b0f93f-kube-api-access-wp4w6\") pod \"cert-manager-858654f9db-z75cc\" (UID: \"509af944-2b66-4364-80f2-0793e8b0f93f\") " pod="cert-manager/cert-manager-858654f9db-z75cc" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.479457 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.486934 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwmh\" (UniqueName: \"kubernetes.io/projected/c0e0de34-8f98-4db6-abf2-856f6477119e-kube-api-access-vpwmh\") pod \"cert-manager-webhook-687f57d79b-tlm4s\" (UID: \"c0e0de34-8f98-4db6-abf2-856f6477119e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.487017 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp4w6\" (UniqueName: \"kubernetes.io/projected/509af944-2b66-4364-80f2-0793e8b0f93f-kube-api-access-wp4w6\") pod \"cert-manager-858654f9db-z75cc\" (UID: \"509af944-2b66-4364-80f2-0793e8b0f93f\") " pod="cert-manager/cert-manager-858654f9db-z75cc" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.511860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwmh\" (UniqueName: \"kubernetes.io/projected/c0e0de34-8f98-4db6-abf2-856f6477119e-kube-api-access-vpwmh\") pod \"cert-manager-webhook-687f57d79b-tlm4s\" (UID: \"c0e0de34-8f98-4db6-abf2-856f6477119e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.512935 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp4w6\" (UniqueName: \"kubernetes.io/projected/509af944-2b66-4364-80f2-0793e8b0f93f-kube-api-access-wp4w6\") pod \"cert-manager-858654f9db-z75cc\" (UID: \"509af944-2b66-4364-80f2-0793e8b0f93f\") " pod="cert-manager/cert-manager-858654f9db-z75cc" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.522261 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.747018 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tlm4s"] Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.755283 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.771451 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" event={"ID":"c0e0de34-8f98-4db6-abf2-856f6477119e","Type":"ContainerStarted","Data":"7e89741168a3e3ca0107baff406478c75815f02e417e2e53c225a0f3fe452fa0"} Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.784287 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j"] Jan 26 16:16:53 crc kubenswrapper[4680]: W0126 16:16:53.784344 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ada9d5c_4b8d_41fe_a413_577bd303d42b.slice/crio-040a18bf9b4da5cbeac18ea4beb64fb8473806648034ca67ae8a487342f621df WatchSource:0}: Error finding container 040a18bf9b4da5cbeac18ea4beb64fb8473806648034ca67ae8a487342f621df: Status 404 returned error can't find the container with id 040a18bf9b4da5cbeac18ea4beb64fb8473806648034ca67ae8a487342f621df Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.803547 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z75cc" Jan 26 16:16:53 crc kubenswrapper[4680]: I0126 16:16:53.963271 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z75cc"] Jan 26 16:16:53 crc kubenswrapper[4680]: W0126 16:16:53.970769 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod509af944_2b66_4364_80f2_0793e8b0f93f.slice/crio-ab0ee1fbc7619e3778067365cf1b6219c6916fe218c0bfaccfc4f7405c4dcc19 WatchSource:0}: Error finding container ab0ee1fbc7619e3778067365cf1b6219c6916fe218c0bfaccfc4f7405c4dcc19: Status 404 returned error can't find the container with id ab0ee1fbc7619e3778067365cf1b6219c6916fe218c0bfaccfc4f7405c4dcc19 Jan 26 16:16:54 crc kubenswrapper[4680]: I0126 16:16:54.778103 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" event={"ID":"7ada9d5c-4b8d-41fe-a413-577bd303d42b","Type":"ContainerStarted","Data":"040a18bf9b4da5cbeac18ea4beb64fb8473806648034ca67ae8a487342f621df"} Jan 26 16:16:54 crc kubenswrapper[4680]: I0126 16:16:54.779474 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z75cc" event={"ID":"509af944-2b66-4364-80f2-0793e8b0f93f","Type":"ContainerStarted","Data":"ab0ee1fbc7619e3778067365cf1b6219c6916fe218c0bfaccfc4f7405c4dcc19"} Jan 26 16:16:59 crc kubenswrapper[4680]: I0126 16:16:59.806465 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" event={"ID":"c0e0de34-8f98-4db6-abf2-856f6477119e","Type":"ContainerStarted","Data":"b28253ef5b316b6e61a714820c75f3906c5d0bb1a57e6b80965707c9f82110a1"} Jan 26 16:16:59 crc kubenswrapper[4680]: I0126 16:16:59.806806 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:16:59 crc kubenswrapper[4680]: I0126 16:16:59.810083 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" event={"ID":"7ada9d5c-4b8d-41fe-a413-577bd303d42b","Type":"ContainerStarted","Data":"c2f233696cc63487448d66b5c3b0cda6b8fff98e63d0625d97971c81b47b86e1"} Jan 26 16:16:59 crc kubenswrapper[4680]: I0126 16:16:59.820397 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" podStartSLOduration=1.312976644 podStartE2EDuration="6.820379131s" podCreationTimestamp="2026-01-26 16:16:53 +0000 UTC" firstStartedPulling="2026-01-26 16:16:53.754566431 +0000 UTC m=+688.915838700" lastFinishedPulling="2026-01-26 16:16:59.261968918 +0000 UTC m=+694.423241187" observedRunningTime="2026-01-26 16:16:59.818234799 +0000 UTC m=+694.979507068" watchObservedRunningTime="2026-01-26 16:16:59.820379131 +0000 UTC m=+694.981651400" Jan 26 16:16:59 crc kubenswrapper[4680]: I0126 16:16:59.856485 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpw6j" podStartSLOduration=1.4343056029999999 podStartE2EDuration="6.856467104s" podCreationTimestamp="2026-01-26 16:16:53 +0000 UTC" firstStartedPulling="2026-01-26 16:16:53.786150165 +0000 UTC m=+688.947422434" lastFinishedPulling="2026-01-26 16:16:59.208311666 +0000 UTC m=+694.369583935" observedRunningTime="2026-01-26 16:16:59.839738241 +0000 UTC m=+695.001010520" watchObservedRunningTime="2026-01-26 16:16:59.856467104 +0000 UTC m=+695.017739373" 
Jan 26 16:17:00 crc kubenswrapper[4680]: I0126 16:17:00.814712 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z75cc" event={"ID":"509af944-2b66-4364-80f2-0793e8b0f93f","Type":"ContainerStarted","Data":"8b75b302e563b3e4cdf71e355261e3590b9c31e587c915c3df20aae9bef84722"} Jan 26 16:17:00 crc kubenswrapper[4680]: I0126 16:17:00.833420 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-z75cc" podStartSLOduration=1.264625325 podStartE2EDuration="7.833401613s" podCreationTimestamp="2026-01-26 16:16:53 +0000 UTC" firstStartedPulling="2026-01-26 16:16:53.973210856 +0000 UTC m=+689.134483125" lastFinishedPulling="2026-01-26 16:17:00.541987144 +0000 UTC m=+695.703259413" observedRunningTime="2026-01-26 16:17:00.832113206 +0000 UTC m=+695.993385475" watchObservedRunningTime="2026-01-26 16:17:00.833401613 +0000 UTC m=+695.994673882" Jan 26 16:17:08 crc kubenswrapper[4680]: I0126 16:17:08.525980 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.757760 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j2vl"] Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759559 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-controller" containerID="cri-o://bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759625 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="nbdb" containerID="cri-o://a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759651 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759673 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="northd" containerID="cri-o://762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759684 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-node" containerID="cri-o://c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.759696 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-acl-logging" containerID="cri-o://489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: 
I0126 16:17:15.759710 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="sbdb" containerID="cri-o://a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.811836 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller" containerID="cri-o://f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" gracePeriod=30 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.900881 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.903040 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovn-acl-logging/0.log" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.903839 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" exitCode=0 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.903864 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" exitCode=143 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.903909 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.903937 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.905433 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/2.log" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.906544 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/1.log" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.906575 4680 generic.go:334] "Generic (PLEG): container finished" podID="9ac04312-7b74-4193-9b93-b54b91bab69b" containerID="5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504" exitCode=2 Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.906597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerDied","Data":"5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504"} Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.906623 4680 scope.go:117] "RemoveContainer" containerID="baa5467e6ec62ef5c28fd65e36cc229bc3fb1b58e53d2dfe123869ab134c4d81" Jan 26 16:17:15 crc kubenswrapper[4680]: I0126 16:17:15.907205 4680 scope.go:117] 
"RemoveContainer" containerID="5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504" Jan 26 16:17:15 crc kubenswrapper[4680]: E0126 16:17:15.907391 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqgn2_openshift-multus(9ac04312-7b74-4193-9b93-b54b91bab69b)\"" pod="openshift-multus/multus-lqgn2" podUID="9ac04312-7b74-4193-9b93-b54b91bab69b" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.119917 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.123021 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovn-acl-logging/0.log" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.123548 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovn-controller/0.log" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.124862 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168691 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zbpn8"] Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168872 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168883 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller" Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168891 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168899 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller" Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168906 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kubecfg-setup" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168912 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kubecfg-setup" Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168922 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="nbdb" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168928 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="nbdb" Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168939 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168945 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" 
containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168954 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-acl-logging"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168960 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-acl-logging"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168968 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168975 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168982 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.168988 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.168997 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="northd"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169003 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="northd"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.169013 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="sbdb"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169020 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="sbdb"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.169027 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169032 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.169043 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-node"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169048 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-node"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169144 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169152 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169162 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-acl-logging"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169171 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169177 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169185 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="sbdb"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169192 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="kube-rbac-proxy-node"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169198 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169206 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovn-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169213 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="northd"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169223 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="nbdb"
Jan 26 16:17:16 crc kubenswrapper[4680]: E0126 16:17:16.169305 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169311 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.169388 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerName="ovnkube-controller"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.171156 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318382 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318702 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxrq\" (UniqueName: \"kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318730 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318752 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318773 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318814 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318842 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318886 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318913 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318932 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318956 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.318981 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319003 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319022 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319107 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319133 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319156 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319203 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319229 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns\") pod \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\" (UID: \"f8b202a9-2dd7-4e9d-a072-c51433d3596f\") "
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319368 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-kubelet\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319393 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-netd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319418 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-var-lib-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319447 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt87\" (UniqueName: \"kubernetes.io/projected/8bc5951e-2f18-4454-9de8-03a295fe8e1a-kube-api-access-5xt87\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319470 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-ovn\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319490 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-config\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-bin\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319561 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-log-socket\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319584 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-systemd-units\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319618 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-env-overrides\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovn-node-metrics-cert\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319683 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-slash\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319701 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-node-log\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319731 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-etc-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319754 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319776 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-script-lib\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319779 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319799 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-systemd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319830 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319860 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319877 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319887 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319893 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319917 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319909 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket" (OuterVolumeSpecName: "log-socket") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319929 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319939 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-netns\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319947 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.319971 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320001 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log" (OuterVolumeSpecName: "node-log") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320032 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320351 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320448 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320491 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320527 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash" (OuterVolumeSpecName: "host-slash") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320657 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320774 4680 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320792 4680 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-log-socket\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320804 4680 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320815 4680 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320829 4680 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320842 4680 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320853 4680 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320866 4680 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320877 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320890 4680 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320901 4680 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320913 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320927 4680 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320939 4680 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320950 4680 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.320962 4680 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.325349 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq" (OuterVolumeSpecName: "kube-api-access-vtxrq") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "kube-api-access-vtxrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.329579 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.336283 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f8b202a9-2dd7-4e9d-a072-c51433d3596f" (UID: "f8b202a9-2dd7-4e9d-a072-c51433d3596f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422114 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovn-node-metrics-cert\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422197 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-slash\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-node-log\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-etc-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422274 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-slash\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422339 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-script-lib\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-etc-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422373 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-systemd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-node-log\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422605 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422677 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-netns\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-kubelet\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422823 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-netd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422886 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-var-lib-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.422950 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xt87\" (UniqueName: \"kubernetes.io/projected/8bc5951e-2f18-4454-9de8-03a295fe8e1a-kube-api-access-5xt87\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423009 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-ovn\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423054 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-config\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423177 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-netd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423262 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423294 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-bin\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423334 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-var-lib-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423341 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-log-socket\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423378 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-log-socket\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423418 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-openvswitch\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423416 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-script-lib\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423455 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-systemd-units\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423462 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-kubelet\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423481 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-cni-bin\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423533 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-netns\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423568 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-ovn\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-env-overrides\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-run-systemd\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.423934 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxrq\" (UniqueName: \"kubernetes.io/projected/f8b202a9-2dd7-4e9d-a072-c51433d3596f-kube-api-access-vtxrq\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424015 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424046 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8b202a9-2dd7-4e9d-a072-c51433d3596f-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424125 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8b202a9-2dd7-4e9d-a072-c51433d3596f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424144 4680 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8b202a9-2dd7-4e9d-a072-c51433d3596f-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424237 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bc5951e-2f18-4454-9de8-03a295fe8e1a-systemd-units\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424553 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-env-overrides\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.424979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovnkube-config\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.427362 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bc5951e-2f18-4454-9de8-03a295fe8e1a-ovn-node-metrics-cert\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.453227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xt87\" (UniqueName: \"kubernetes.io/projected/8bc5951e-2f18-4454-9de8-03a295fe8e1a-kube-api-access-5xt87\") pod \"ovnkube-node-zbpn8\" (UID: \"8bc5951e-2f18-4454-9de8-03a295fe8e1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.486991 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.915354 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovnkube-controller/3.log"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.918697 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovn-acl-logging/0.log"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.919591 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j2vl_f8b202a9-2dd7-4e9d-a072-c51433d3596f/ovn-controller/0.log"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920215 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920268 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920279 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920289 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920298 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920306 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" exitCode=143
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920335 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920364 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920393 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920410 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920425 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920438 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920455 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920469 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920482 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920490 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920499 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920507 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920514 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920525 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920532 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920540 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920490 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920552 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j2vl" event={"ID":"f8b202a9-2dd7-4e9d-a072-c51433d3596f","Type":"ContainerDied","Data":"ed596967c76a6dce0921c4f8ea9429ede481ad9e460e0cd7af85d9121a0d0efb"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920663 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920681 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920688 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920714 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920721 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920734 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920745 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920752 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920766 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.920773 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.922369 4680 generic.go:334] "Generic (PLEG): container finished" podID="8bc5951e-2f18-4454-9de8-03a295fe8e1a" containerID="f56840175ed2ae1610098c6121c607176aa5ddf586007babeb81e73501b4c083" exitCode=0
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.922421 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerDied","Data":"f56840175ed2ae1610098c6121c607176aa5ddf586007babeb81e73501b4c083"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.922459 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"a4c3bf9a047baab799c079fbeccf18759b266a48c8c0d9e50605014db7beaa94"}
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.926791 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/2.log"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.978891 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.979127 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j2vl"]
Jan 26 16:17:16 crc kubenswrapper[4680]: I0126 16:17:16.983337 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j2vl"]
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.009582 4680 scope.go:117] "RemoveContainer" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.043498 4680 scope.go:117] "RemoveContainer" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.061399 4680 scope.go:117] "RemoveContainer" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.085327 4680 scope.go:117] "RemoveContainer" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.100697 4680 scope.go:117] "RemoveContainer" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.124843 4680 scope.go:117] "RemoveContainer" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.144309 4680 scope.go:117] "RemoveContainer" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.178550 4680 scope.go:117] "RemoveContainer" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.183902 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b202a9-2dd7-4e9d-a072-c51433d3596f" path="/var/lib/kubelet/pods/f8b202a9-2dd7-4e9d-a072-c51433d3596f/volumes"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.205395 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.205954 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.205996 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} err="failed to get container status \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.206035 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.206603 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": container with ID starting with 2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0 not found: ID does not exist" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.206651 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} err="failed to get container status \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": rpc error: code = NotFound desc = could not find container \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": container with ID starting with 2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0 not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.206682 4680 scope.go:117] "RemoveContainer" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.207189 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": container with ID starting with a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c not found: ID does not exist" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.207234 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"} err="failed to get container status \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": rpc error: code = NotFound desc = could not find container \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": container with ID starting with a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.207265 4680 scope.go:117] "RemoveContainer" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.207629 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": container with ID starting with a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d not found: ID does not exist" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.207661 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"} err="failed to get container status \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": rpc error: code = NotFound desc = could not find container \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": container with ID starting with a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.207684 4680 scope.go:117] "RemoveContainer" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.207970 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": container with ID starting with 762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046 not found: ID does not exist" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.207996 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"} err="failed to get container status \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": rpc error: code = NotFound desc = could not find container \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": container with ID starting with 762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046 not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.208012 4680 scope.go:117] "RemoveContainer" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"
Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.208409 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": container with ID starting with d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9 not found: ID does not exist" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"
Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.208433 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"} err="failed to get container status \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": rpc error: code = NotFound desc = could not find container \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": container with ID starting with d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9 not found: ID does not exist"
Jan 26 16:17:17 crc kubenswrapper[4680]:
I0126 16:17:17.208446 4680 scope.go:117] "RemoveContainer" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.208973 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": container with ID starting with c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae not found: ID does not exist" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.208997 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} err="failed to get container status \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": rpc error: code = NotFound desc = could not find container \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": container with ID starting with c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.209013 4680 scope.go:117] "RemoveContainer" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.209385 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": container with ID starting with 489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc not found: ID does not exist" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.209411 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} err="failed to get container status \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": rpc error: code = NotFound desc = could not find container \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": container with ID starting with 489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.209436 4680 scope.go:117] "RemoveContainer" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.209822 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": container with ID starting with bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426 not found: ID does not exist" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.209854 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"} err="failed to get container status \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": rpc error: code = NotFound desc = could not find container \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": container 
with ID starting with bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.209873 4680 scope.go:117] "RemoveContainer" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" Jan 26 16:17:17 crc kubenswrapper[4680]: E0126 16:17:17.210193 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": container with ID starting with 4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd not found: ID does not exist" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210219 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"} err="failed to get container status \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": rpc error: code = NotFound desc = could not find container \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": container with ID starting with 4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210236 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210555 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} err="failed to get container status \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210576 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210818 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} err="failed to get container status \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": rpc error: code = NotFound desc = could not find container \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": container with ID starting with 2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.210852 4680 scope.go:117] "RemoveContainer" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211165 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"} err="failed to get container status \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": rpc error: code = NotFound desc = could not find container \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": container 
with ID starting with a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211196 4680 scope.go:117] "RemoveContainer" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211447 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"} err="failed to get container status \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": rpc error: code = NotFound desc = could not find container \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": container with ID starting with a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211469 4680 scope.go:117] "RemoveContainer" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211793 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"} err="failed to get container status \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": rpc error: code = NotFound desc = could not find container \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": container with ID starting with 762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211808 4680 scope.go:117] "RemoveContainer" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211983 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"} err="failed to get container status \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": rpc error: code = NotFound desc = could not find container \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": container with ID starting with d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.211995 4680 scope.go:117] "RemoveContainer" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214177 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} err="failed to get container status \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": rpc error: code = NotFound desc = could not find container \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": container with ID starting with c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214260 4680 scope.go:117] "RemoveContainer" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214545 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} err="failed to get container status \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": rpc error: code = NotFound desc = could not find container \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": container with ID starting with 489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214564 4680 scope.go:117] "RemoveContainer" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214755 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"} err="failed to get container status \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": rpc error: code = NotFound desc = could not find container \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": container with ID starting with bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.214770 4680 scope.go:117] "RemoveContainer" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215087 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"} err="failed to get container status \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": rpc error: code = NotFound desc = could not find container \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": container with ID starting with 4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215105 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215307 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} err="failed to get container status \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215323 4680 scope.go:117] "RemoveContainer" containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215538 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} err="failed to get container status \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": rpc error: code = NotFound desc = could not find container \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": container with ID starting with 2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0 not found: ID does not exist" Jan 
26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215550 4680 scope.go:117] "RemoveContainer" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215782 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"} err="failed to get container status \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": rpc error: code = NotFound desc = could not find container \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": container with ID starting with a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.215819 4680 scope.go:117] "RemoveContainer" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216020 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"} err="failed to get container status \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": rpc error: code = NotFound desc = could not find container \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": container with ID starting with a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216035 4680 scope.go:117] "RemoveContainer" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216235 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"} err="failed to get container status \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": rpc error: code = NotFound desc = could not find container \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": container with ID starting with 762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216256 4680 scope.go:117] "RemoveContainer" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216477 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"} err="failed to get container status \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": rpc error: code = NotFound desc = could not find container \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": container with ID starting with d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216561 4680 scope.go:117] "RemoveContainer" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.216852 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} err="failed to get container status 
\"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": rpc error: code = NotFound desc = could not find container \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": container with ID starting with c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.217131 4680 scope.go:117] "RemoveContainer" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.217484 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} err="failed to get container status \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": rpc error: code = NotFound desc = could not find container \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": container with ID starting with 489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.217630 4680 scope.go:117] "RemoveContainer" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218009 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"} err="failed to get container status \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": rpc error: code = NotFound desc = could not find container \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": container with ID starting with bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218111 4680 scope.go:117] "RemoveContainer" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218402 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"} err="failed to get container status \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": rpc error: code = NotFound desc = could not find container \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": container with ID starting with 4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218419 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218640 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} err="failed to get container status \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218654 4680 scope.go:117] "RemoveContainer" 
containerID="2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218854 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0"} err="failed to get container status \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": rpc error: code = NotFound desc = could not find container \"2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0\": container with ID starting with 2908ddea5ff6d8ab3bb84db84b497fd43d4d3cd428b5f4cc22041793d3666ac0 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.218870 4680 scope.go:117] "RemoveContainer" containerID="a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219183 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c"} err="failed to get container status \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": rpc error: code = NotFound desc = could not find container \"a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c\": container with ID starting with a53f2a37d8a8a3fe0c40f476e6df7c2e63601440d3a60bf1f0856220f84c9b5c not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219199 4680 scope.go:117] "RemoveContainer" containerID="a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219405 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d"} err="failed to get container status \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": rpc error: code = NotFound desc = could not find container \"a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d\": container with ID starting with a1d4a6cdc72d20c1159a38223b14d249a813f1d503fcbb0d8ba6242975ec6b8d not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219421 4680 scope.go:117] "RemoveContainer" containerID="762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219610 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046"} err="failed to get container status \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": rpc error: code = NotFound desc = could not find container \"762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046\": container with ID starting with 762099fc34073ccadf5a39cdf16657ab3e9cd15b46bd0814022ce7d23d562046 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219625 4680 scope.go:117] "RemoveContainer" containerID="d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219830 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9"} err="failed to get container status \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": rpc error: code = NotFound desc = could not find 
container \"d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9\": container with ID starting with d75abad9984467183872e20c1031e6c95694424108b2c8f133d2569a507bf2e9 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.219845 4680 scope.go:117] "RemoveContainer" containerID="c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220108 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae"} err="failed to get container status \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": rpc error: code = NotFound desc = could not find container \"c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae\": container with ID starting with c3554604e25b9d034b679c0242a737db0eb5a9091f544fb06564d29e0de0ceae not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220124 4680 scope.go:117] "RemoveContainer" containerID="489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220327 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc"} err="failed to get container status \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": rpc error: code = NotFound desc = could not find container \"489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc\": container with ID starting with 489a2b222a26435a05958bfc438ad7f164dc978fb10cd2ce7565709851cc16fc not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220341 4680 scope.go:117] "RemoveContainer" containerID="bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220612 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426"} err="failed to get container status \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": rpc error: code = NotFound desc = could not find container \"bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426\": container with ID starting with bf0b75c9dc6fc1d060c93209c1be9d24f71852d2a0431545c8f6a0809ff9d426 not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220626 4680 scope.go:117] "RemoveContainer" containerID="4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220828 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd"} err="failed to get container status \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": rpc error: code = NotFound desc = could not find container \"4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd\": container with ID starting with 4a6714c201efa412cb7110def3d19827a52f910d694cd0ddc0320eef4019cdcd not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.220841 4680 scope.go:117] "RemoveContainer" containerID="f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.221099 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a"} err="failed to get container status \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": rpc error: code = NotFound desc = could not find container \"f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a\": container with ID starting with f22e85a81bd5531e0186130fb9bf46c0b6589a10129992eb86e02f0b63bdba7a not found: ID does not exist" Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940311 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"ee5fc61650fe7c7d32713c07696caf7af6fd94e8ad24daaa212b8d0eb17cc055"} Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940696 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"e251deeee91810053e9be47aef2653d59eebf51e57bf9ab7d480a7efa644e4ad"} Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940712 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"26aeefd8ee73e62f3eec4aca3fa7eede4bff323af59a210b369269c8f3517e5c"} Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940723 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"e9847a725b471a1a1fbde990bfb05faeff6fb656c2cf4a6e26eb8e098110be57"} Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940734 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"0b2f61a625747ed31478d66f17445fe0be80e1d28b913ffc6294cd1310f8ded9"} Jan 26 16:17:17 crc kubenswrapper[4680]: I0126 16:17:17.940747 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"5b6eba0d638747e974c7789973e0bc81274c4bcf5dbcea5a6f57a3b516fec8d7"} Jan 26 16:17:20 crc kubenswrapper[4680]: I0126 16:17:20.984586 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"f1f30d4e9e96a056486337dac31580add82e8aefd1ec08dd3befc2becd9a9090"} Jan 26 16:17:22 crc kubenswrapper[4680]: I0126 16:17:22.998995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" event={"ID":"8bc5951e-2f18-4454-9de8-03a295fe8e1a","Type":"ContainerStarted","Data":"85d2024bd61f4263226404120900a71d4124a7d109063b1c395cf29214bf089f"} Jan 26 16:17:22 crc kubenswrapper[4680]: I0126 16:17:22.999263 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:23 crc kubenswrapper[4680]: I0126 16:17:23.028680 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" podStartSLOduration=7.028665109 podStartE2EDuration="7.028665109s" podCreationTimestamp="2026-01-26 
16:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:17:23.024896861 +0000 UTC m=+718.186169150" watchObservedRunningTime="2026-01-26 16:17:23.028665109 +0000 UTC m=+718.189937378" Jan 26 16:17:23 crc kubenswrapper[4680]: I0126 16:17:23.035559 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:24 crc kubenswrapper[4680]: I0126 16:17:24.005436 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:24 crc kubenswrapper[4680]: I0126 16:17:24.005933 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:24 crc kubenswrapper[4680]: I0126 16:17:24.048990 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:31 crc kubenswrapper[4680]: I0126 16:17:31.169734 4680 scope.go:117] "RemoveContainer" containerID="5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504" Jan 26 16:17:31 crc kubenswrapper[4680]: E0126 16:17:31.170942 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqgn2_openshift-multus(9ac04312-7b74-4193-9b93-b54b91bab69b)\"" pod="openshift-multus/multus-lqgn2" podUID="9ac04312-7b74-4193-9b93-b54b91bab69b" Jan 26 16:17:46 crc kubenswrapper[4680]: I0126 16:17:46.169431 4680 scope.go:117] "RemoveContainer" containerID="5565421e31d49f8991f452086b11b6115325b4ee38798808abf5c24b9ff73504" Jan 26 16:17:46 crc kubenswrapper[4680]: I0126 16:17:46.505923 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" Jan 26 16:17:47 crc kubenswrapper[4680]: I0126 16:17:47.161060 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqgn2_9ac04312-7b74-4193-9b93-b54b91bab69b/kube-multus/2.log" Jan 26 16:17:47 crc kubenswrapper[4680]: I0126 16:17:47.161467 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqgn2" event={"ID":"9ac04312-7b74-4193-9b93-b54b91bab69b","Type":"ContainerStarted","Data":"ae122d270221685305f405fce4e3639f92e44c45bdbadf3c3917fc635cdc81c5"} Jan 26 16:17:58 crc kubenswrapper[4680]: I0126 16:17:58.412546 4680 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.107608 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd"] Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.108636 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.110791 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.118818 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd"] Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.220046 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gh9\" (UniqueName: \"kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.220287 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.220347 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.321858 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gh9\" (UniqueName: \"kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.321939 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.321962 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.322886 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.323131 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.345207 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gh9\" (UniqueName: \"kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.446853 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:17:59 crc kubenswrapper[4680]: I0126 16:17:59.658006 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd"] Jan 26 16:18:00 crc kubenswrapper[4680]: I0126 16:18:00.231673 4680 generic.go:334] "Generic (PLEG): container finished" podID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerID="2a6257fba829fa47d57b1b554a11e6cac007b10fa96b2b24a9aa66745ff5bb97" exitCode=0 Jan 26 16:18:00 crc kubenswrapper[4680]: I0126 16:18:00.231764 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" event={"ID":"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a","Type":"ContainerDied","Data":"2a6257fba829fa47d57b1b554a11e6cac007b10fa96b2b24a9aa66745ff5bb97"} Jan 26 16:18:00 crc kubenswrapper[4680]: I0126 16:18:00.233140 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" event={"ID":"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a","Type":"ContainerStarted","Data":"a9f069e99bda75b6713abb6f7ad9d9d10b37bcdd65999a2f740d49635038df0d"} Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.370162 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.371906 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.388794 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.556903 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.557209 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlwk\" (UniqueName: \"kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.557240 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.658718 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.658973 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlwk\" (UniqueName: \"kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.659097 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.659161 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.659626 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.681398 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8qlwk\" (UniqueName: \"kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk\") pod \"redhat-operators-wzsgq\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.688833 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:01 crc kubenswrapper[4680]: I0126 16:18:01.872168 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:02 crc kubenswrapper[4680]: I0126 16:18:02.244730 4680 generic.go:334] "Generic (PLEG): container finished" podID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerID="4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a" exitCode=0 Jan 26 16:18:02 crc kubenswrapper[4680]: I0126 16:18:02.244924 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerDied","Data":"4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a"} Jan 26 16:18:02 crc kubenswrapper[4680]: I0126 16:18:02.245002 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerStarted","Data":"9706cd71ccb86639bbfa6e9c8f461e48fc7f02d1f8b89f81eaefdaee3b5ec623"} Jan 26 16:18:03 crc kubenswrapper[4680]: I0126 16:18:03.251374 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerStarted","Data":"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84"} Jan 26 16:18:03 crc kubenswrapper[4680]: I0126 16:18:03.252803 4680 generic.go:334] "Generic (PLEG): container finished" podID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerID="f44f93cab7e57350975a9232a681218caa15a97ddd1a292b0a0339785ea3456d" exitCode=0 Jan 26 16:18:03 crc kubenswrapper[4680]: I0126 16:18:03.252834 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" event={"ID":"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a","Type":"ContainerDied","Data":"f44f93cab7e57350975a9232a681218caa15a97ddd1a292b0a0339785ea3456d"} Jan 26 16:18:04 crc kubenswrapper[4680]: I0126 16:18:04.259263 4680 generic.go:334] "Generic (PLEG): container finished" podID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerID="59f26b6564c515438a37baa9215a20d3bccc210b59c71e06bdc5167fca88e7c0" exitCode=0 Jan 26 16:18:04 crc kubenswrapper[4680]: I0126 16:18:04.259335 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" event={"ID":"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a","Type":"ContainerDied","Data":"59f26b6564c515438a37baa9215a20d3bccc210b59c71e06bdc5167fca88e7c0"} Jan 26 16:18:04 crc kubenswrapper[4680]: I0126 16:18:04.261230 4680 generic.go:334] "Generic (PLEG): container finished" podID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerID="1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84" exitCode=0 Jan 26 16:18:04 crc kubenswrapper[4680]: I0126 16:18:04.261265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" 
event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerDied","Data":"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84"} Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.267622 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerStarted","Data":"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000"} Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.285610 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzsgq" podStartSLOduration=1.826782385 podStartE2EDuration="4.285592387s" podCreationTimestamp="2026-01-26 16:18:01 +0000 UTC" firstStartedPulling="2026-01-26 16:18:02.246356278 +0000 UTC m=+757.407628547" lastFinishedPulling="2026-01-26 16:18:04.70516628 +0000 UTC m=+759.866438549" observedRunningTime="2026-01-26 16:18:05.284555987 +0000 UTC m=+760.445828276" watchObservedRunningTime="2026-01-26 16:18:05.285592387 +0000 UTC m=+760.446864656" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.484475 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.518617 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2gh9\" (UniqueName: \"kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9\") pod \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.518789 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util\") pod \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.518886 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle\") pod \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\" (UID: \"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a\") " Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.520159 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle" (OuterVolumeSpecName: "bundle") pod "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" (UID: "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.529270 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9" (OuterVolumeSpecName: "kube-api-access-l2gh9") pod "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" (UID: "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a"). InnerVolumeSpecName "kube-api-access-l2gh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.534945 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util" (OuterVolumeSpecName: "util") pod "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" (UID: "5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.623309 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.623374 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2gh9\" (UniqueName: \"kubernetes.io/projected/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-kube-api-access-l2gh9\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:05 crc kubenswrapper[4680]: I0126 16:18:05.623389 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a-util\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:06 crc kubenswrapper[4680]: I0126 16:18:06.275063 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" event={"ID":"5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a","Type":"ContainerDied","Data":"a9f069e99bda75b6713abb6f7ad9d9d10b37bcdd65999a2f740d49635038df0d"} Jan 26 16:18:06 crc kubenswrapper[4680]: I0126 16:18:06.275919 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f069e99bda75b6713abb6f7ad9d9d10b37bcdd65999a2f740d49635038df0d" Jan 26 16:18:06 crc kubenswrapper[4680]: I0126 16:18:06.275119 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713q8zvd" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.737512 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g65dt"] Jan 26 16:18:08 crc kubenswrapper[4680]: E0126 16:18:08.738564 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="pull" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.738630 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="pull" Jan 26 16:18:08 crc kubenswrapper[4680]: E0126 16:18:08.738683 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="extract" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.738731 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="extract" Jan 26 16:18:08 crc kubenswrapper[4680]: E0126 16:18:08.738809 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="util" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.738867 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="util" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.739028 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2bd3bb-1ec8-4746-b8d4-de6dc30dfb0a" containerName="extract" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.739427 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.743601 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.745551 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cdkdt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.746750 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.751809 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g65dt"] Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.761494 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtgq\" (UniqueName: \"kubernetes.io/projected/982109d4-dfb6-48e2-9014-712d6f7bd882-kube-api-access-dbtgq\") pod \"nmstate-operator-646758c888-g65dt\" (UID: \"982109d4-dfb6-48e2-9014-712d6f7bd882\") " pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.862925 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtgq\" (UniqueName: \"kubernetes.io/projected/982109d4-dfb6-48e2-9014-712d6f7bd882-kube-api-access-dbtgq\") pod \"nmstate-operator-646758c888-g65dt\" (UID: \"982109d4-dfb6-48e2-9014-712d6f7bd882\") " pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" Jan 26 16:18:08 crc kubenswrapper[4680]: I0126 16:18:08.883110 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtgq\" 
(UniqueName: \"kubernetes.io/projected/982109d4-dfb6-48e2-9014-712d6f7bd882-kube-api-access-dbtgq\") pod \"nmstate-operator-646758c888-g65dt\" (UID: \"982109d4-dfb6-48e2-9014-712d6f7bd882\") " pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" Jan 26 16:18:09 crc kubenswrapper[4680]: I0126 16:18:09.056713 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" Jan 26 16:18:09 crc kubenswrapper[4680]: I0126 16:18:09.280085 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-g65dt"] Jan 26 16:18:09 crc kubenswrapper[4680]: W0126 16:18:09.291257 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod982109d4_dfb6_48e2_9014_712d6f7bd882.slice/crio-db3cd97b23be0872f1682aecff97171ed642c6f948964bf8b966bcce5bc4bf9c WatchSource:0}: Error finding container db3cd97b23be0872f1682aecff97171ed642c6f948964bf8b966bcce5bc4bf9c: Status 404 returned error can't find the container with id db3cd97b23be0872f1682aecff97171ed642c6f948964bf8b966bcce5bc4bf9c Jan 26 16:18:10 crc kubenswrapper[4680]: I0126 16:18:10.307699 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" event={"ID":"982109d4-dfb6-48e2-9014-712d6f7bd882","Type":"ContainerStarted","Data":"db3cd97b23be0872f1682aecff97171ed642c6f948964bf8b966bcce5bc4bf9c"} Jan 26 16:18:11 crc kubenswrapper[4680]: I0126 16:18:11.689161 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:11 crc kubenswrapper[4680]: I0126 16:18:11.689572 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:11 crc kubenswrapper[4680]: I0126 16:18:11.732879 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:12 crc kubenswrapper[4680]: I0126 16:18:12.322403 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" event={"ID":"982109d4-dfb6-48e2-9014-712d6f7bd882","Type":"ContainerStarted","Data":"9445a6a6bfd592944a77dadb9e8a4e0219340b04076b30afbbcce53d43dd61c6"} Jan 26 16:18:12 crc kubenswrapper[4680]: I0126 16:18:12.365041 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:12 crc kubenswrapper[4680]: I0126 16:18:12.393416 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-g65dt" podStartSLOduration=1.672601032 podStartE2EDuration="4.393398992s" podCreationTimestamp="2026-01-26 16:18:08 +0000 UTC" firstStartedPulling="2026-01-26 16:18:09.300120955 +0000 UTC m=+764.461393224" lastFinishedPulling="2026-01-26 16:18:12.020918915 +0000 UTC m=+767.182191184" observedRunningTime="2026-01-26 16:18:12.338279482 +0000 UTC m=+767.499551751" watchObservedRunningTime="2026-01-26 16:18:12.393398992 +0000 UTC m=+767.554671271" Jan 26 16:18:13 crc kubenswrapper[4680]: I0126 16:18:13.555703 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:14 crc kubenswrapper[4680]: I0126 16:18:14.334501 4680 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-wzsgq" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="registry-server" containerID="cri-o://80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000" gracePeriod=2 Jan 26 16:18:15 crc kubenswrapper[4680]: I0126 16:18:15.990996 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.052120 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities\") pod \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.052172 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qlwk\" (UniqueName: \"kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk\") pod \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.052208 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content\") pod \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\" (UID: \"14f5c261-7cdf-42c6-8e2a-26835b4c333c\") " Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.052939 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities" (OuterVolumeSpecName: "utilities") pod "14f5c261-7cdf-42c6-8e2a-26835b4c333c" (UID: "14f5c261-7cdf-42c6-8e2a-26835b4c333c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.058190 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk" (OuterVolumeSpecName: "kube-api-access-8qlwk") pod "14f5c261-7cdf-42c6-8e2a-26835b4c333c" (UID: "14f5c261-7cdf-42c6-8e2a-26835b4c333c"). InnerVolumeSpecName "kube-api-access-8qlwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.153521 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.153557 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qlwk\" (UniqueName: \"kubernetes.io/projected/14f5c261-7cdf-42c6-8e2a-26835b4c333c-kube-api-access-8qlwk\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.156384 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14f5c261-7cdf-42c6-8e2a-26835b4c333c" (UID: "14f5c261-7cdf-42c6-8e2a-26835b4c333c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.255215 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f5c261-7cdf-42c6-8e2a-26835b4c333c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.346851 4680 generic.go:334] "Generic (PLEG): container finished" podID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerID="80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000" exitCode=0 Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.346935 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerDied","Data":"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000"} Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.346943 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsgq" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.346973 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsgq" event={"ID":"14f5c261-7cdf-42c6-8e2a-26835b4c333c","Type":"ContainerDied","Data":"9706cd71ccb86639bbfa6e9c8f461e48fc7f02d1f8b89f81eaefdaee3b5ec623"} Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.346992 4680 scope.go:117] "RemoveContainer" containerID="80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.362261 4680 scope.go:117] "RemoveContainer" containerID="1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.383456 4680 scope.go:117] "RemoveContainer" containerID="4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.390479 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.393949 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wzsgq"] Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.397967 4680 scope.go:117] "RemoveContainer" containerID="80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000" Jan 26 16:18:16 crc kubenswrapper[4680]: E0126 16:18:16.398388 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000\": container with ID starting with 80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000 not found: ID does not exist" containerID="80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.398415 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000"} err="failed to get container status \"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000\": rpc error: code = NotFound desc = could not find container \"80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000\": container with ID starting with 80d9338a615f3a13c973e69d96d78436ffb2ea7a0f3f79975c27197d2173e000 not found: ID does not exist" Jan 26 16:18:16 crc 
kubenswrapper[4680]: I0126 16:18:16.398437 4680 scope.go:117] "RemoveContainer" containerID="1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84" Jan 26 16:18:16 crc kubenswrapper[4680]: E0126 16:18:16.399087 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84\": container with ID starting with 1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84 not found: ID does not exist" containerID="1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.399132 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84"} err="failed to get container status \"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84\": rpc error: code = NotFound desc = could not find container \"1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84\": container with ID starting with 1d98c952f0c0378ab8d87d99acbde7ab1c4fe6848c31971a1bf3593523baec84 not found: ID does not exist" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.399158 4680 scope.go:117] "RemoveContainer" containerID="4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a" Jan 26 16:18:16 crc kubenswrapper[4680]: E0126 16:18:16.399461 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a\": container with ID starting with 4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a not found: ID does not exist" containerID="4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.399486 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a"} err="failed to get container status \"4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a\": rpc error: code = NotFound desc = could not find container \"4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a\": container with ID starting with 4701e668f08d9572c0a16b486d2db52a61b1c8765b59401aec5693a5fa0f750a not found: ID does not exist" Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.982485 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:18:16 crc kubenswrapper[4680]: I0126 16:18:16.982574 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:18:17 crc kubenswrapper[4680]: I0126 16:18:17.181648 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" path="/var/lib/kubelet/pods/14f5c261-7cdf-42c6-8e2a-26835b4c333c/volumes" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.396891 4680 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zfqmx"] Jan 26 16:18:19 crc kubenswrapper[4680]: E0126 16:18:19.397493 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="extract-content" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.397511 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="extract-content" Jan 26 16:18:19 crc kubenswrapper[4680]: E0126 16:18:19.397523 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="extract-utilities" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.397531 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="extract-utilities" Jan 26 16:18:19 crc kubenswrapper[4680]: E0126 16:18:19.397552 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="registry-server" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.397559 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="registry-server" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.397678 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f5c261-7cdf-42c6-8e2a-26835b4c333c" containerName="registry-server" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.398369 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.401517 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-brq88" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.435852 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.436710 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.443313 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.448163 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zfqmx"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.455999 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.479967 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-nqg55"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.481520 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.583755 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.584536 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.589059 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.591368 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4rsrg" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592186 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-ovs-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592235 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxl5\" (UniqueName: \"kubernetes.io/projected/c98c7bbd-7d35-463f-a9c2-f959a1368ff8-kube-api-access-pfxl5\") pod \"nmstate-metrics-54757c584b-zfqmx\" (UID: \"c98c7bbd-7d35-463f-a9c2-f959a1368ff8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592280 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-nmstate-lock\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592307 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vd6d\" (UniqueName: \"kubernetes.io/projected/11ded85a-b350-41a1-b9f2-f57901f116c5-kube-api-access-5vd6d\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592339 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89cb99e5-a352-468f-bcc6-a90442f0bd6b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592402 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvh2\" (UniqueName: \"kubernetes.io/projected/89cb99e5-a352-468f-bcc6-a90442f0bd6b-kube-api-access-ldvh2\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.592434 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-dbus-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.601879 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 
16:18:19.636112 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693142 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/82985aa2-2406-4567-91db-985ef35c5106-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693198 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693244 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-ovs-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxl5\" (UniqueName: \"kubernetes.io/projected/c98c7bbd-7d35-463f-a9c2-f959a1368ff8-kube-api-access-pfxl5\") pod \"nmstate-metrics-54757c584b-zfqmx\" (UID: \"c98c7bbd-7d35-463f-a9c2-f959a1368ff8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693317 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-nmstate-lock\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693353 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-ovs-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693369 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vd6d\" (UniqueName: \"kubernetes.io/projected/11ded85a-b350-41a1-b9f2-f57901f116c5-kube-api-access-5vd6d\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693389 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-nmstate-lock\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/89cb99e5-a352-468f-bcc6-a90442f0bd6b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvh2\" (UniqueName: \"kubernetes.io/projected/89cb99e5-a352-468f-bcc6-a90442f0bd6b-kube-api-access-ldvh2\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693648 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqhf\" (UniqueName: \"kubernetes.io/projected/82985aa2-2406-4567-91db-985ef35c5106-kube-api-access-pjqhf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.693675 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-dbus-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.694030 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11ded85a-b350-41a1-b9f2-f57901f116c5-dbus-socket\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.706111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89cb99e5-a352-468f-bcc6-a90442f0bd6b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.712126 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vd6d\" (UniqueName: \"kubernetes.io/projected/11ded85a-b350-41a1-b9f2-f57901f116c5-kube-api-access-5vd6d\") pod \"nmstate-handler-nqg55\" (UID: \"11ded85a-b350-41a1-b9f2-f57901f116c5\") " pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.717291 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxl5\" (UniqueName: \"kubernetes.io/projected/c98c7bbd-7d35-463f-a9c2-f959a1368ff8-kube-api-access-pfxl5\") pod \"nmstate-metrics-54757c584b-zfqmx\" (UID: \"c98c7bbd-7d35-463f-a9c2-f959a1368ff8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.719460 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvh2\" (UniqueName: \"kubernetes.io/projected/89cb99e5-a352-468f-bcc6-a90442f0bd6b-kube-api-access-ldvh2\") pod \"nmstate-webhook-8474b5b9d8-xr5vv\" (UID: \"89cb99e5-a352-468f-bcc6-a90442f0bd6b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.767696 4680 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.778615 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fc5c8f49-48gmj"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.779464 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.794574 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqhf\" (UniqueName: \"kubernetes.io/projected/82985aa2-2406-4567-91db-985ef35c5106-kube-api-access-pjqhf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.794646 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/82985aa2-2406-4567-91db-985ef35c5106-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.794679 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: E0126 16:18:19.794861 4680 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 16:18:19 crc kubenswrapper[4680]: E0126 16:18:19.794912 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert podName:82985aa2-2406-4567-91db-985ef35c5106 nodeName:}" failed. No retries permitted until 2026-01-26 16:18:20.294896482 +0000 UTC m=+775.456168751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7wp4d" (UID: "82985aa2-2406-4567-91db-985ef35c5106") : secret "plugin-serving-cert" not found Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.795625 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/82985aa2-2406-4567-91db-985ef35c5106-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.796980 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.808732 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc5c8f49-48gmj"] Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.824930 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqhf\" (UniqueName: \"kubernetes.io/projected/82985aa2-2406-4567-91db-985ef35c5106-kube-api-access-pjqhf\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:19 crc kubenswrapper[4680]: W0126 16:18:19.837031 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ded85a_b350_41a1_b9f2_f57901f116c5.slice/crio-768f21a868ae036a04e6e538cc5454ad47572be90704221b6b4f46bba87a1626 WatchSource:0}: Error finding container 768f21a868ae036a04e6e538cc5454ad47572be90704221b6b4f46bba87a1626: Status 404 returned error can't find the container with id 768f21a868ae036a04e6e538cc5454ad47572be90704221b6b4f46bba87a1626 Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.896122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-oauth-serving-cert\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.896206 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.896260 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chfp\" (UniqueName: \"kubernetes.io/projected/b07144ca-cc49-4f4f-9620-88ebbdffce43-kube-api-access-5chfp\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.896284 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-serving-cert\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.896304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-oauth-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.897012 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-trusted-ca-bundle\") 
pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.897054 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-service-ca\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998342 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-oauth-serving-cert\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998632 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998666 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chfp\" (UniqueName: \"kubernetes.io/projected/b07144ca-cc49-4f4f-9620-88ebbdffce43-kube-api-access-5chfp\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998684 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-serving-cert\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998709 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-oauth-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-trusted-ca-bundle\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.998751 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-service-ca\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.999164 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-oauth-serving-cert\") pod 
\"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:19 crc kubenswrapper[4680]: I0126 16:18:19.999321 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-service-ca\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.001302 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-trusted-ca-bundle\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.004590 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.004802 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-serving-cert\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.013054 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.014463 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b07144ca-cc49-4f4f-9620-88ebbdffce43-console-oauth-config\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.019447 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chfp\" (UniqueName: \"kubernetes.io/projected/b07144ca-cc49-4f4f-9620-88ebbdffce43-kube-api-access-5chfp\") pod \"console-6fc5c8f49-48gmj\" (UID: \"b07144ca-cc49-4f4f-9620-88ebbdffce43\") " pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.164502 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.287655 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv"] Jan 26 16:18:20 crc kubenswrapper[4680]: W0126 16:18:20.295121 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89cb99e5_a352_468f_bcc6_a90442f0bd6b.slice/crio-4ce3b3826f2109cdb1756ba296526916edd8ab56e9c03258b602492f201d00e0 WatchSource:0}: Error finding container 4ce3b3826f2109cdb1756ba296526916edd8ab56e9c03258b602492f201d00e0: Status 404 returned error can't find the container with id 4ce3b3826f2109cdb1756ba296526916edd8ab56e9c03258b602492f201d00e0 Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.302417 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.305518 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/82985aa2-2406-4567-91db-985ef35c5106-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7wp4d\" (UID: \"82985aa2-2406-4567-91db-985ef35c5106\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.347776 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc5c8f49-48gmj"] Jan 26 16:18:20 crc kubenswrapper[4680]: W0126 16:18:20.351182 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb07144ca_cc49_4f4f_9620_88ebbdffce43.slice/crio-898d51c681a28b3ab9fbddc44574d29e1b67e08ad9bb42c2d823089047592b2d WatchSource:0}: Error finding container 898d51c681a28b3ab9fbddc44574d29e1b67e08ad9bb42c2d823089047592b2d: Status 404 returned error can't find the container with id 898d51c681a28b3ab9fbddc44574d29e1b67e08ad9bb42c2d823089047592b2d Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.367841 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" event={"ID":"89cb99e5-a352-468f-bcc6-a90442f0bd6b","Type":"ContainerStarted","Data":"4ce3b3826f2109cdb1756ba296526916edd8ab56e9c03258b602492f201d00e0"} Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.368977 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc5c8f49-48gmj" event={"ID":"b07144ca-cc49-4f4f-9620-88ebbdffce43","Type":"ContainerStarted","Data":"898d51c681a28b3ab9fbddc44574d29e1b67e08ad9bb42c2d823089047592b2d"} Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.370624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nqg55" event={"ID":"11ded85a-b350-41a1-b9f2-f57901f116c5","Type":"ContainerStarted","Data":"768f21a868ae036a04e6e538cc5454ad47572be90704221b6b4f46bba87a1626"} Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.397384 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zfqmx"] Jan 26 16:18:20 crc kubenswrapper[4680]: W0126 16:18:20.405527 4680 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc98c7bbd_7d35_463f_a9c2_f959a1368ff8.slice/crio-d1bb4475aa71a2f1ad6f5cfe92c5579ce58c51582a0387fca3ec440fa4fc654c WatchSource:0}: Error finding container d1bb4475aa71a2f1ad6f5cfe92c5579ce58c51582a0387fca3ec440fa4fc654c: Status 404 returned error can't find the container with id d1bb4475aa71a2f1ad6f5cfe92c5579ce58c51582a0387fca3ec440fa4fc654c Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.498853 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" Jan 26 16:18:20 crc kubenswrapper[4680]: I0126 16:18:20.677465 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d"] Jan 26 16:18:20 crc kubenswrapper[4680]: W0126 16:18:20.684405 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82985aa2_2406_4567_91db_985ef35c5106.slice/crio-0924ea03e4eca2d93633849932851ae498a3a358ab195c1b9080ffc5797de6e0 WatchSource:0}: Error finding container 0924ea03e4eca2d93633849932851ae498a3a358ab195c1b9080ffc5797de6e0: Status 404 returned error can't find the container with id 0924ea03e4eca2d93633849932851ae498a3a358ab195c1b9080ffc5797de6e0 Jan 26 16:18:21 crc kubenswrapper[4680]: I0126 16:18:21.377912 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc5c8f49-48gmj" event={"ID":"b07144ca-cc49-4f4f-9620-88ebbdffce43","Type":"ContainerStarted","Data":"3f56a4250d46b175ff9e4a36929d6097c39755ace625c2236926b8d90af3149e"} Jan 26 16:18:21 crc kubenswrapper[4680]: I0126 16:18:21.379499 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" event={"ID":"82985aa2-2406-4567-91db-985ef35c5106","Type":"ContainerStarted","Data":"0924ea03e4eca2d93633849932851ae498a3a358ab195c1b9080ffc5797de6e0"} Jan 26 16:18:21 crc kubenswrapper[4680]: I0126 16:18:21.380542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" event={"ID":"c98c7bbd-7d35-463f-a9c2-f959a1368ff8","Type":"ContainerStarted","Data":"d1bb4475aa71a2f1ad6f5cfe92c5579ce58c51582a0387fca3ec440fa4fc654c"} Jan 26 16:18:21 crc kubenswrapper[4680]: I0126 16:18:21.397722 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fc5c8f49-48gmj" podStartSLOduration=2.397700426 podStartE2EDuration="2.397700426s" podCreationTimestamp="2026-01-26 16:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:18:21.394252806 +0000 UTC m=+776.555525075" watchObservedRunningTime="2026-01-26 16:18:21.397700426 +0000 UTC m=+776.558972705" Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.399266 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" event={"ID":"82985aa2-2406-4567-91db-985ef35c5106","Type":"ContainerStarted","Data":"b40d69e167fb837a812ba838cdff2e5e93e2b4d5a37b291ae9d7b2edd9248760"} Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.401889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" 
event={"ID":"c98c7bbd-7d35-463f-a9c2-f959a1368ff8","Type":"ContainerStarted","Data":"16dd9244f7537fd0f8cebfb85ef255bf2c2292e7d5c6723984c35f7156cacdaa"} Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.403483 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nqg55" event={"ID":"11ded85a-b350-41a1-b9f2-f57901f116c5","Type":"ContainerStarted","Data":"3be91e21546a4055c6b17838cf9036d8830b3fd617e79927de25aab7d0dbfc5a"} Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.403772 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.405605 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" event={"ID":"89cb99e5-a352-468f-bcc6-a90442f0bd6b","Type":"ContainerStarted","Data":"2785545effbf6b5a56df62a9501325ee9f8f53ce32cfcfecc77a4abcfa85dbb2"} Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.405734 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.419704 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7wp4d" podStartSLOduration=2.286207918 podStartE2EDuration="5.419689067s" podCreationTimestamp="2026-01-26 16:18:19 +0000 UTC" firstStartedPulling="2026-01-26 16:18:20.687229047 +0000 UTC m=+775.848501306" lastFinishedPulling="2026-01-26 16:18:23.820710126 +0000 UTC m=+778.981982455" observedRunningTime="2026-01-26 16:18:24.414233399 +0000 UTC m=+779.575505658" watchObservedRunningTime="2026-01-26 16:18:24.419689067 +0000 UTC m=+779.580961356" Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.447985 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" podStartSLOduration=1.922198937 podStartE2EDuration="5.447971793s" podCreationTimestamp="2026-01-26 16:18:19 +0000 UTC" firstStartedPulling="2026-01-26 16:18:20.297552155 +0000 UTC m=+775.458824424" lastFinishedPulling="2026-01-26 16:18:23.823325011 +0000 UTC m=+778.984597280" observedRunningTime="2026-01-26 16:18:24.445658776 +0000 UTC m=+779.606931045" watchObservedRunningTime="2026-01-26 16:18:24.447971793 +0000 UTC m=+779.609244062" Jan 26 16:18:24 crc kubenswrapper[4680]: I0126 16:18:24.462696 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-nqg55" podStartSLOduration=1.451935228 podStartE2EDuration="5.462676867s" podCreationTimestamp="2026-01-26 16:18:19 +0000 UTC" firstStartedPulling="2026-01-26 16:18:19.855686736 +0000 UTC m=+775.016959005" lastFinishedPulling="2026-01-26 16:18:23.866428375 +0000 UTC m=+779.027700644" observedRunningTime="2026-01-26 16:18:24.46069073 +0000 UTC m=+779.621962999" watchObservedRunningTime="2026-01-26 16:18:24.462676867 +0000 UTC m=+779.623949136" Jan 26 16:18:27 crc kubenswrapper[4680]: I0126 16:18:27.425518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" event={"ID":"c98c7bbd-7d35-463f-a9c2-f959a1368ff8","Type":"ContainerStarted","Data":"82dd58fc6ef9d3d9c6a6ab1f9e6c4c5eeb8cc46c20e76430dd3ab8f4a46c7e89"} Jan 26 16:18:27 crc kubenswrapper[4680]: I0126 16:18:27.452139 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-zfqmx" podStartSLOduration=2.391230459 podStartE2EDuration="8.452039278s" podCreationTimestamp="2026-01-26 16:18:19 +0000 UTC" firstStartedPulling="2026-01-26 16:18:20.407740253 +0000 UTC m=+775.569012522" lastFinishedPulling="2026-01-26 16:18:26.468549072 +0000 UTC m=+781.629821341" observedRunningTime="2026-01-26 16:18:27.449851705 +0000 UTC m=+782.611123994" watchObservedRunningTime="2026-01-26 16:18:27.452039278 +0000 UTC m=+782.613311587" Jan 26 16:18:29 crc kubenswrapper[4680]: I0126 16:18:29.816548 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-nqg55" Jan 26 16:18:30 crc kubenswrapper[4680]: I0126 16:18:30.166019 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:30 crc kubenswrapper[4680]: I0126 16:18:30.166087 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:30 crc kubenswrapper[4680]: I0126 16:18:30.170546 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:30 crc kubenswrapper[4680]: I0126 16:18:30.443418 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fc5c8f49-48gmj" Jan 26 16:18:30 crc kubenswrapper[4680]: I0126 16:18:30.516419 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:18:39 crc kubenswrapper[4680]: I0126 16:18:39.773683 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" Jan 26 16:18:46 crc kubenswrapper[4680]: I0126 16:18:46.981136 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:18:46 crc kubenswrapper[4680]: I0126 16:18:46.981797 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.776268 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg"] Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.779271 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.785547 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg"] Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.788452 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.934459 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlzm2\" (UniqueName: \"kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.934542 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:52 crc kubenswrapper[4680]: I0126 16:18:52.934585 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.035243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.035348 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzm2\" (UniqueName: \"kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.035400 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.035886 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.035943 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.053464 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzm2\" (UniqueName: \"kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.098303 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.280021 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg"] Jan 26 16:18:53 crc kubenswrapper[4680]: I0126 16:18:53.558504 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" event={"ID":"d0297a3e-ef6e-40c4-951c-2483bf5deac7","Type":"ContainerStarted","Data":"466ec9da762139037962200f1040cc84240def17ddd0205cf6b7359ec54ecb2a"} Jan 26 16:18:54 crc kubenswrapper[4680]: I0126 16:18:54.565312 4680 generic.go:334] "Generic (PLEG): container finished" podID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerID="caa373a541b655180d26cb2bdad8dda1b710f303efa1786ff54b9ecfbb24de06" exitCode=0 Jan 26 16:18:54 crc kubenswrapper[4680]: I0126 16:18:54.565357 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" event={"ID":"d0297a3e-ef6e-40c4-951c-2483bf5deac7","Type":"ContainerDied","Data":"caa373a541b655180d26cb2bdad8dda1b710f303efa1786ff54b9ecfbb24de06"} Jan 26 16:18:55 crc kubenswrapper[4680]: I0126 16:18:55.596962 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z2kjp" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" containerID="cri-o://bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996" gracePeriod=15 Jan 26 16:18:55 crc kubenswrapper[4680]: I0126 16:18:55.923475 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z2kjp_9f58b269-9b27-441e-bd05-b99b435c29c9/console/0.log" Jan 26 16:18:55 crc kubenswrapper[4680]: I0126 16:18:55.923848 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073617 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073674 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073707 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073741 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pq4\" (UniqueName: \"kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073804 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073834 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.073864 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca\") pod \"9f58b269-9b27-441e-bd05-b99b435c29c9\" (UID: \"9f58b269-9b27-441e-bd05-b99b435c29c9\") " Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.074495 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.074515 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca" (OuterVolumeSpecName: "service-ca") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.074529 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.074816 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config" (OuterVolumeSpecName: "console-config") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.082676 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.084421 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.088304 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4" (OuterVolumeSpecName: "kube-api-access-l4pq4") pod "9f58b269-9b27-441e-bd05-b99b435c29c9" (UID: "9f58b269-9b27-441e-bd05-b99b435c29c9"). InnerVolumeSpecName "kube-api-access-l4pq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175165 4680 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175563 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175579 4680 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175588 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4pq4\" (UniqueName: \"kubernetes.io/projected/9f58b269-9b27-441e-bd05-b99b435c29c9-kube-api-access-l4pq4\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175598 4680 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f58b269-9b27-441e-bd05-b99b435c29c9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175606 4680 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.175615 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f58b269-9b27-441e-bd05-b99b435c29c9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.575364 4680 generic.go:334] "Generic (PLEG): container finished" podID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerID="11aea5846d82c0293ffbdb93b5352d97532e00b0bbf33e4fb52610783dbf141d" exitCode=0 Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.575437 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" event={"ID":"d0297a3e-ef6e-40c4-951c-2483bf5deac7","Type":"ContainerDied","Data":"11aea5846d82c0293ffbdb93b5352d97532e00b0bbf33e4fb52610783dbf141d"} Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577054 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z2kjp_9f58b269-9b27-441e-bd05-b99b435c29c9/console/0.log" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577137 4680 generic.go:334] "Generic (PLEG): container finished" podID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerID="bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996" exitCode=2 Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z2kjp" event={"ID":"9f58b269-9b27-441e-bd05-b99b435c29c9","Type":"ContainerDied","Data":"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996"} Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577180 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z2kjp" 
event={"ID":"9f58b269-9b27-441e-bd05-b99b435c29c9","Type":"ContainerDied","Data":"56ca29dd55bff735203b3b3f30b15e5e0de1db13460dbbbb64103b87fd33d604"} Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577197 4680 scope.go:117] "RemoveContainer" containerID="bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.577239 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z2kjp" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.602357 4680 scope.go:117] "RemoveContainer" containerID="bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996" Jan 26 16:18:56 crc kubenswrapper[4680]: E0126 16:18:56.602770 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996\": container with ID starting with bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996 not found: ID does not exist" containerID="bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.602802 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996"} err="failed to get container status \"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996\": rpc error: code = NotFound desc = could not find container \"bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996\": container with ID starting with bc11087a521732c9a0090dd7c7d8e1415d53d6846403ecac1f23165db540e996 not found: ID does not exist" Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.609999 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:18:56 crc kubenswrapper[4680]: I0126 16:18:56.626322 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z2kjp"] Jan 26 16:18:57 crc kubenswrapper[4680]: I0126 16:18:57.176146 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" path="/var/lib/kubelet/pods/9f58b269-9b27-441e-bd05-b99b435c29c9/volumes" Jan 26 16:18:57 crc kubenswrapper[4680]: I0126 16:18:57.586505 4680 generic.go:334] "Generic (PLEG): container finished" podID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerID="38f6612a6e660ce430e95a92fd7b0d958077576f88a67b526250099aec320966" exitCode=0 Jan 26 16:18:57 crc kubenswrapper[4680]: I0126 16:18:57.586552 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" event={"ID":"d0297a3e-ef6e-40c4-951c-2483bf5deac7","Type":"ContainerDied","Data":"38f6612a6e660ce430e95a92fd7b0d958077576f88a67b526250099aec320966"} Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.787424 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.811763 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util\") pod \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.812011 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle\") pod \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.812112 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlzm2\" (UniqueName: \"kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2\") pod \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\" (UID: \"d0297a3e-ef6e-40c4-951c-2483bf5deac7\") " Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.814802 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle" (OuterVolumeSpecName: "bundle") pod "d0297a3e-ef6e-40c4-951c-2483bf5deac7" (UID: "d0297a3e-ef6e-40c4-951c-2483bf5deac7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.822654 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2" (OuterVolumeSpecName: "kube-api-access-zlzm2") pod "d0297a3e-ef6e-40c4-951c-2483bf5deac7" (UID: "d0297a3e-ef6e-40c4-951c-2483bf5deac7"). InnerVolumeSpecName "kube-api-access-zlzm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.826597 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util" (OuterVolumeSpecName: "util") pod "d0297a3e-ef6e-40c4-951c-2483bf5deac7" (UID: "d0297a3e-ef6e-40c4-951c-2483bf5deac7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.913437 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlzm2\" (UniqueName: \"kubernetes.io/projected/d0297a3e-ef6e-40c4-951c-2483bf5deac7-kube-api-access-zlzm2\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.913480 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-util\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:58 crc kubenswrapper[4680]: I0126 16:18:58.913496 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0297a3e-ef6e-40c4-951c-2483bf5deac7-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:18:59 crc kubenswrapper[4680]: I0126 16:18:59.598694 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" event={"ID":"d0297a3e-ef6e-40c4-951c-2483bf5deac7","Type":"ContainerDied","Data":"466ec9da762139037962200f1040cc84240def17ddd0205cf6b7359ec54ecb2a"} Jan 26 16:18:59 crc kubenswrapper[4680]: I0126 16:18:59.598740 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466ec9da762139037962200f1040cc84240def17ddd0205cf6b7359ec54ecb2a" Jan 26 16:18:59 crc kubenswrapper[4680]: I0126 16:18:59.599060 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wtg" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.988548 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg"] Jan 26 16:19:08 crc kubenswrapper[4680]: E0126 16:19:08.989291 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989304 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" Jan 26 16:19:08 crc kubenswrapper[4680]: E0126 16:19:08.989313 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="extract" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989323 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="extract" Jan 26 16:19:08 crc kubenswrapper[4680]: E0126 16:19:08.989344 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="util" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989350 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="util" Jan 26 16:19:08 crc kubenswrapper[4680]: E0126 16:19:08.989357 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="pull" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989362 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="pull" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989483 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0297a3e-ef6e-40c4-951c-2483bf5deac7" containerName="extract" Jan 
26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989498 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f58b269-9b27-441e-bd05-b99b435c29c9" containerName="console" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.989879 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.993427 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.993619 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.994279 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.995195 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lmlkw" Jan 26 16:19:08 crc kubenswrapper[4680]: I0126 16:19:08.995347 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.014215 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg"] Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.060384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-webhook-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.060454 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvl9\" (UniqueName: \"kubernetes.io/projected/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-kube-api-access-jpvl9\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.060481 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-apiservice-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.161889 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-webhook-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.161931 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvl9\" (UniqueName: 
\"kubernetes.io/projected/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-kube-api-access-jpvl9\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.161952 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-apiservice-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.169431 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-apiservice-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.169612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-webhook-cert\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.199728 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvl9\" (UniqueName: \"kubernetes.io/projected/1fd34661-ceb8-4b7a-a3f7-deedab72f5dc-kube-api-access-jpvl9\") pod \"metallb-operator-controller-manager-57c4fd656d-gzjfg\" (UID: \"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc\") " pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.305305 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.473384 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc"] Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.474543 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.490282 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.490345 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nhgxv" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.490552 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.524665 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc"] Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.676048 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.676122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-webhook-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.676160 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5pp\" (UniqueName: \"kubernetes.io/projected/13d27b97-b926-4a78-991d-e969612ff055-kube-api-access-9z5pp\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.727170 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg"] Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.777175 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.777443 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-webhook-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.777554 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5pp\" (UniqueName: \"kubernetes.io/projected/13d27b97-b926-4a78-991d-e969612ff055-kube-api-access-9z5pp\") pod 
\"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.781927 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.785688 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13d27b97-b926-4a78-991d-e969612ff055-webhook-cert\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.796936 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5pp\" (UniqueName: \"kubernetes.io/projected/13d27b97-b926-4a78-991d-e969612ff055-kube-api-access-9z5pp\") pod \"metallb-operator-webhook-server-6f6cc977fc-qfzhc\" (UID: \"13d27b97-b926-4a78-991d-e969612ff055\") " pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:09 crc kubenswrapper[4680]: I0126 16:19:09.808060 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:10 crc kubenswrapper[4680]: I0126 16:19:10.132762 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc"] Jan 26 16:19:10 crc kubenswrapper[4680]: W0126 16:19:10.142854 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d27b97_b926_4a78_991d_e969612ff055.slice/crio-b44f3a02de2d83b719045ead49295630a2b0d33fdb48a00c1675900f8c8790f9 WatchSource:0}: Error finding container b44f3a02de2d83b719045ead49295630a2b0d33fdb48a00c1675900f8c8790f9: Status 404 returned error can't find the container with id b44f3a02de2d83b719045ead49295630a2b0d33fdb48a00c1675900f8c8790f9 Jan 26 16:19:10 crc kubenswrapper[4680]: I0126 16:19:10.654331 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" event={"ID":"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc","Type":"ContainerStarted","Data":"03d78af9d68895b8d6a29204c46b6fe0d179b233fb26acce138f8bd65133fe57"} Jan 26 16:19:10 crc kubenswrapper[4680]: I0126 16:19:10.655837 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" event={"ID":"13d27b97-b926-4a78-991d-e969612ff055","Type":"ContainerStarted","Data":"b44f3a02de2d83b719045ead49295630a2b0d33fdb48a00c1675900f8c8790f9"} Jan 26 16:19:16 crc kubenswrapper[4680]: I0126 16:19:16.980475 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:19:16 crc kubenswrapper[4680]: I0126 16:19:16.981048 4680 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:19:16 crc kubenswrapper[4680]: I0126 16:19:16.981168 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:19:16 crc kubenswrapper[4680]: I0126 16:19:16.981691 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:19:16 crc kubenswrapper[4680]: I0126 16:19:16.981749 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94" gracePeriod=600 Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.697354 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" event={"ID":"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc","Type":"ContainerStarted","Data":"248bbbbfd3edd726ecb67576229c0c059a5d15cc4accb826e0dfab80afed00be"} Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.698421 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.700470 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94" exitCode=0 Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.700518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94"} Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.700538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9"} Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.700554 4680 scope.go:117] "RemoveContainer" containerID="b59ba864e8ff8bde338fb3aa885ce44dea0664465957bc0e21e513197f5844ec" Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.703997 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" event={"ID":"13d27b97-b926-4a78-991d-e969612ff055","Type":"ContainerStarted","Data":"9961baa9cd092689fc5ace10a2255fcea469bbc96b77854c86a7daf7ffe1d532"} Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.704295 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 
16:19:17.726311 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" podStartSLOduration=2.706482984 podStartE2EDuration="9.726292612s" podCreationTimestamp="2026-01-26 16:19:08 +0000 UTC" firstStartedPulling="2026-01-26 16:19:09.737830407 +0000 UTC m=+824.899102676" lastFinishedPulling="2026-01-26 16:19:16.757640045 +0000 UTC m=+831.918912304" observedRunningTime="2026-01-26 16:19:17.722096891 +0000 UTC m=+832.883369160" watchObservedRunningTime="2026-01-26 16:19:17.726292612 +0000 UTC m=+832.887564881" Jan 26 16:19:17 crc kubenswrapper[4680]: I0126 16:19:17.774413 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" podStartSLOduration=2.151502535 podStartE2EDuration="8.77439785s" podCreationTimestamp="2026-01-26 16:19:09 +0000 UTC" firstStartedPulling="2026-01-26 16:19:10.145713536 +0000 UTC m=+825.306985805" lastFinishedPulling="2026-01-26 16:19:16.768608851 +0000 UTC m=+831.929881120" observedRunningTime="2026-01-26 16:19:17.75603778 +0000 UTC m=+832.917310049" watchObservedRunningTime="2026-01-26 16:19:17.77439785 +0000 UTC m=+832.935670119" Jan 26 16:19:29 crc kubenswrapper[4680]: I0126 16:19:29.814946 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" Jan 26 16:19:49 crc kubenswrapper[4680]: I0126 16:19:49.307392 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.092450 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.093328 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.095602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.096669 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cjd7d" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.098004 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fppvg"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.100855 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.104716 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.106397 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.107162 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.185318 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-mqkb5"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.186430 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.186430 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-conf\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.186975 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-sockets\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187118 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx6cp\" (UniqueName: \"kubernetes.io/projected/6fcb2787-4ea2-498d-9d2b-92577f4e0640-kube-api-access-hx6cp\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187244 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics-certs\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187350 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-reloader\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187473 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cdf\" (UniqueName: \"kubernetes.io/projected/ae5969bc-48f4-499f-9ca5-6858279a47d6-kube-api-access-p4cdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187642 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187781 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-startup\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.187943 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae5969bc-48f4-499f-9ca5-6858279a47d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.192366 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.192526 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rdsfj" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.192387 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.192659 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.209765 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-jzg2h"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.211750 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.213821 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.237845 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jzg2h"] Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289317 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbtf\" (UniqueName: \"kubernetes.io/projected/a30260c8-eca8-456a-a94d-61839973f6ee-kube-api-access-9nbtf\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289358 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289381 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/71470be4-25d6-4dab-8fa6-3850938403e2-metallb-excludel2\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289402 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-metrics-certs\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289547 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-startup\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289652 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ae5969bc-48f4-499f-9ca5-6858279a47d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-conf\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289780 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289815 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-sockets\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289834 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289861 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx6cp\" (UniqueName: \"kubernetes.io/projected/6fcb2787-4ea2-498d-9d2b-92577f4e0640-kube-api-access-hx6cp\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289890 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j896q\" (UniqueName: \"kubernetes.io/projected/71470be4-25d6-4dab-8fa6-3850938403e2-kube-api-access-j896q\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289913 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.289948 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics-certs\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290136 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-reloader\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 
16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290152 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-conf\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290185 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4cdf\" (UniqueName: \"kubernetes.io/projected/ae5969bc-48f4-499f-9ca5-6858279a47d6-kube-api-access-p4cdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290233 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-sockets\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290233 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-cert\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fcb2787-4ea2-498d-9d2b-92577f4e0640-reloader\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.290541 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fcb2787-4ea2-498d-9d2b-92577f4e0640-frr-startup\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.297803 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fcb2787-4ea2-498d-9d2b-92577f4e0640-metrics-certs\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.308871 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx6cp\" (UniqueName: \"kubernetes.io/projected/6fcb2787-4ea2-498d-9d2b-92577f4e0640-kube-api-access-hx6cp\") pod \"frr-k8s-fppvg\" (UID: \"6fcb2787-4ea2-498d-9d2b-92577f4e0640\") " pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.311212 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4cdf\" (UniqueName: \"kubernetes.io/projected/ae5969bc-48f4-499f-9ca5-6858279a47d6-kube-api-access-p4cdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.312928 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ae5969bc-48f4-499f-9ca5-6858279a47d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-v9qd8\" (UID: \"ae5969bc-48f4-499f-9ca5-6858279a47d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.391962 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392025 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j896q\" (UniqueName: \"kubernetes.io/projected/71470be4-25d6-4dab-8fa6-3850938403e2-kube-api-access-j896q\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392049 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392095 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-cert\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392119 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nbtf\" (UniqueName: \"kubernetes.io/projected/a30260c8-eca8-456a-a94d-61839973f6ee-kube-api-access-9nbtf\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.392133 4680 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.392191 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs podName:a30260c8-eca8-456a-a94d-61839973f6ee nodeName:}" failed. No retries permitted until 2026-01-26 16:19:50.892172522 +0000 UTC m=+866.053444791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs") pod "controller-6968d8fdc4-jzg2h" (UID: "a30260c8-eca8-456a-a94d-61839973f6ee") : secret "controller-certs-secret" not found Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/71470be4-25d6-4dab-8fa6-3850938403e2-metallb-excludel2\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-metrics-certs\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.392522 4680 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.392577 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist podName:71470be4-25d6-4dab-8fa6-3850938403e2 nodeName:}" failed. No retries permitted until 2026-01-26 16:19:50.892562233 +0000 UTC m=+866.053834502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist") pod "speaker-mqkb5" (UID: "71470be4-25d6-4dab-8fa6-3850938403e2") : secret "metallb-memberlist" not found Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.392708 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/71470be4-25d6-4dab-8fa6-3850938403e2-metallb-excludel2\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.394388 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.396002 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-metrics-certs\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.405639 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-cert\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.409763 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j896q\" (UniqueName: \"kubernetes.io/projected/71470be4-25d6-4dab-8fa6-3850938403e2-kube-api-access-j896q\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.414164 4680 util.go:30] "No sandbox for pod can be found. 
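
[Editor's note] The reflector.go:368 entry above ("Caches populated for *v1.Secret from object-metallb-system/metallb-webhook-cert") reflects that the kubelet tracks each referenced Secret or ConfigMap individually rather than listing whole namespaces. A hedged sketch of such a single-object watch using a field selector; the field-selector API is standard client-go, but the exact options the kubelet uses are an assumption, not shown in this log:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
    )

    // watchOneSecret follows a single named Secret, the way per-object
    // reflectors track only what pods on the node actually reference.
    func watchOneSecret(ctx context.Context, cs kubernetes.Interface) error {
        sel := fields.OneTermEqualSelector("metadata.name", "metallb-webhook-cert").String()
        w, err := cs.CoreV1().Secrets("metallb-system").
            Watch(ctx, metav1.ListOptions{FieldSelector: sel})
        if err != nil {
            return err
        }
        defer w.Stop()
        for range w.ResultChan() {
            // each event is Added/Modified/Deleted for this one object
        }
        return nil
    }
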
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.414584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nbtf\" (UniqueName: \"kubernetes.io/projected/a30260c8-eca8-456a-a94d-61839973f6ee-kube-api-access-9nbtf\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.425649 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.620314 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8"] Jan 26 16:19:50 crc kubenswrapper[4680]: W0126 16:19:50.624084 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae5969bc_48f4_499f_9ca5_6858279a47d6.slice/crio-5a696dda00e8b7a70844293510ef80b4d972a8e5237814956166aff3799ba60f WatchSource:0}: Error finding container 5a696dda00e8b7a70844293510ef80b4d972a8e5237814956166aff3799ba60f: Status 404 returned error can't find the container with id 5a696dda00e8b7a70844293510ef80b4d972a8e5237814956166aff3799ba60f Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.884900 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"07cc5e5a9476b1d4c555916418c417dd98e92f245a1c0dd726a80cffcce759a1"} Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.886204 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" event={"ID":"ae5969bc-48f4-499f-9ca5-6858279a47d6","Type":"ContainerStarted","Data":"5a696dda00e8b7a70844293510ef80b4d972a8e5237814956166aff3799ba60f"} Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.900418 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.900485 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.900663 4680 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 16:19:50 crc kubenswrapper[4680]: E0126 16:19:50.900718 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist podName:71470be4-25d6-4dab-8fa6-3850938403e2 nodeName:}" failed. No retries permitted until 2026-01-26 16:19:51.900699313 +0000 UTC m=+867.061971592 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist") pod "speaker-mqkb5" (UID: "71470be4-25d6-4dab-8fa6-3850938403e2") : secret "metallb-memberlist" not found Jan 26 16:19:50 crc kubenswrapper[4680]: I0126 16:19:50.906780 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a30260c8-eca8-456a-a94d-61839973f6ee-metrics-certs\") pod \"controller-6968d8fdc4-jzg2h\" (UID: \"a30260c8-eca8-456a-a94d-61839973f6ee\") " pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.127617 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.373941 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jzg2h"] Jan 26 16:19:51 crc kubenswrapper[4680]: W0126 16:19:51.400875 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30260c8_eca8_456a_a94d_61839973f6ee.slice/crio-18fcb5da7bb5e9e136c5aead45864410d8043f0175f44bc6a4cba8e547f808dc WatchSource:0}: Error finding container 18fcb5da7bb5e9e136c5aead45864410d8043f0175f44bc6a4cba8e547f808dc: Status 404 returned error can't find the container with id 18fcb5da7bb5e9e136c5aead45864410d8043f0175f44bc6a4cba8e547f808dc Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.897164 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jzg2h" event={"ID":"a30260c8-eca8-456a-a94d-61839973f6ee","Type":"ContainerStarted","Data":"99a11ef42d9cb57aa3f4c493e0b5050c796825fbe81e590459625c4ae40f472a"} Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.897442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jzg2h" event={"ID":"a30260c8-eca8-456a-a94d-61839973f6ee","Type":"ContainerStarted","Data":"dadb4c14ce2661f2cb57973b81cde092338307cb701a2a3248fd22ba9bc833b0"} Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.897453 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jzg2h" event={"ID":"a30260c8-eca8-456a-a94d-61839973f6ee","Type":"ContainerStarted","Data":"18fcb5da7bb5e9e136c5aead45864410d8043f0175f44bc6a4cba8e547f808dc"} Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.897467 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.917878 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.923641 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-jzg2h" podStartSLOduration=1.923617301 podStartE2EDuration="1.923617301s" podCreationTimestamp="2026-01-26 16:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:19:51.915236764 +0000 UTC m=+867.076509033" watchObservedRunningTime="2026-01-26 
16:19:51.923617301 +0000 UTC m=+867.084889570" Jan 26 16:19:51 crc kubenswrapper[4680]: I0126 16:19:51.930797 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/71470be4-25d6-4dab-8fa6-3850938403e2-memberlist\") pod \"speaker-mqkb5\" (UID: \"71470be4-25d6-4dab-8fa6-3850938403e2\") " pod="metallb-system/speaker-mqkb5" Jan 26 16:19:52 crc kubenswrapper[4680]: I0126 16:19:52.000919 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mqkb5" Jan 26 16:19:52 crc kubenswrapper[4680]: W0126 16:19:52.030915 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71470be4_25d6_4dab_8fa6_3850938403e2.slice/crio-b0819b70ea15091a70601b321770b61b3df76cb8173d569e326cd7e9f51ca4f9 WatchSource:0}: Error finding container b0819b70ea15091a70601b321770b61b3df76cb8173d569e326cd7e9f51ca4f9: Status 404 returned error can't find the container with id b0819b70ea15091a70601b321770b61b3df76cb8173d569e326cd7e9f51ca4f9 Jan 26 16:19:52 crc kubenswrapper[4680]: I0126 16:19:52.908480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mqkb5" event={"ID":"71470be4-25d6-4dab-8fa6-3850938403e2","Type":"ContainerStarted","Data":"eb72f7a597df441fe45d8066114872738f45c7a642f64713aa315c294ea64e42"} Jan 26 16:19:52 crc kubenswrapper[4680]: I0126 16:19:52.908531 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mqkb5" event={"ID":"71470be4-25d6-4dab-8fa6-3850938403e2","Type":"ContainerStarted","Data":"1d4017c33b42ccaad5d84d04bbea813b5ffb27f3d458f6064b1bcd591a96d506"} Jan 26 16:19:52 crc kubenswrapper[4680]: I0126 16:19:52.908544 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mqkb5" event={"ID":"71470be4-25d6-4dab-8fa6-3850938403e2","Type":"ContainerStarted","Data":"b0819b70ea15091a70601b321770b61b3df76cb8173d569e326cd7e9f51ca4f9"} Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.195792 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-mqkb5" podStartSLOduration=5.195773871 podStartE2EDuration="5.195773871s" podCreationTimestamp="2026-01-26 16:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:19:52.937697319 +0000 UTC m=+868.098969588" watchObservedRunningTime="2026-01-26 16:19:55.195773871 +0000 UTC m=+870.357046150" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.521031 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.522144 4680 util.go:30] "No sandbox for pod can be found. 
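
[Editor's note] The W-level manager.go:1169 entries ("Failed to process watch event ... Status 404 ... can't find the container") recur for nearly every new sandbox in this capture. They are a benign race: the cgroup watch fires before the runtime can report the container, and the lookup returns NotFound. The same race reappears at teardown (the ContainerStatus NotFound errors near the end of this log). A sketch of how such races are commonly tolerated in Go when the backend is gRPC, e.g. a CRI client; the status-code check is standard grpc-go, and treating NotFound as non-fatal is the assumption:

    package main

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound drops NotFound errors from lookups that race with
    // container creation or deletion; anything else is a real failure.
    func ignoreNotFound(err error) error {
        if status.Code(err) == codes.NotFound {
            return nil // container not (yet, or any longer) known to the runtime
        }
        return err
    }
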
Need to start a new one" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.586991 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.592953 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.593132 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.593307 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwxp\" (UniqueName: \"kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.694376 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwxp\" (UniqueName: \"kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.694454 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.694501 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.695218 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.695407 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.724757 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2fwxp\" (UniqueName: \"kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp\") pod \"community-operators-9jt7s\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.725823 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.727055 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.738847 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.795965 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.796015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.796056 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h4v\" (UniqueName: \"kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.839179 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.896911 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.896981 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h4v\" (UniqueName: \"kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.897041 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.897589 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.897808 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:55 crc kubenswrapper[4680]: I0126 16:19:55.919724 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h4v\" (UniqueName: \"kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v\") pod \"certified-operators-tkvl7\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.082379 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.372439 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:19:56 crc kubenswrapper[4680]: W0126 16:19:56.379200 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4424f944_d765_4fe7_ad0a_48a438cc6fca.slice/crio-5c3e718367a3c1ec4235076bf7d919789de3644a1fb2e652917eefffa01e95a6 WatchSource:0}: Error finding container 5c3e718367a3c1ec4235076bf7d919789de3644a1fb2e652917eefffa01e95a6: Status 404 returned error can't find the container with id 5c3e718367a3c1ec4235076bf7d919789de3644a1fb2e652917eefffa01e95a6 Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.610046 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:19:56 crc kubenswrapper[4680]: W0126 16:19:56.620357 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c4197b_8c13_4bd7_9069_cf93833ad305.slice/crio-60e44e551225b3be6bb464704904d7a6b89d4e84341c447a206507ffeeab8ad2 WatchSource:0}: Error finding container 60e44e551225b3be6bb464704904d7a6b89d4e84341c447a206507ffeeab8ad2: Status 404 returned error can't find the container with id 60e44e551225b3be6bb464704904d7a6b89d4e84341c447a206507ffeeab8ad2 Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.943138 4680 generic.go:334] "Generic (PLEG): container finished" podID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerID="9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898" exitCode=0 Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.943229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerDied","Data":"9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898"} Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.943285 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerStarted","Data":"5c3e718367a3c1ec4235076bf7d919789de3644a1fb2e652917eefffa01e95a6"} Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.945053 4680 generic.go:334] "Generic (PLEG): container finished" podID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerID="e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d" exitCode=0 Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.945097 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerDied","Data":"e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d"} Jan 26 16:19:56 crc kubenswrapper[4680]: I0126 16:19:56.945125 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerStarted","Data":"60e44e551225b3be6bb464704904d7a6b89d4e84341c447a206507ffeeab8ad2"} Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.972970 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" 
event={"ID":"ae5969bc-48f4-499f-9ca5-6858279a47d6","Type":"ContainerStarted","Data":"ab0eb09e3c4290ca8bbba13c01a08751a9c6b1df820bb58439a096136ff6eb00"} Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.974004 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.976272 4680 generic.go:334] "Generic (PLEG): container finished" podID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerID="fc7be7b49e5a7de2826df76603dd3eb610853391c25bba3cbe967cdc56907ca9" exitCode=0 Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.976310 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerDied","Data":"fc7be7b49e5a7de2826df76603dd3eb610853391c25bba3cbe967cdc56907ca9"} Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.978343 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerStarted","Data":"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71"} Jan 26 16:20:00 crc kubenswrapper[4680]: I0126 16:20:00.990856 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podStartSLOduration=1.743784011 podStartE2EDuration="10.990843367s" podCreationTimestamp="2026-01-26 16:19:50 +0000 UTC" firstStartedPulling="2026-01-26 16:19:50.627388718 +0000 UTC m=+865.788660987" lastFinishedPulling="2026-01-26 16:19:59.874448074 +0000 UTC m=+875.035720343" observedRunningTime="2026-01-26 16:20:00.988755907 +0000 UTC m=+876.150028176" watchObservedRunningTime="2026-01-26 16:20:00.990843367 +0000 UTC m=+876.152115636" Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.134359 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-jzg2h" Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.929211 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.930277 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.953343 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.984626 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerStarted","Data":"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461"} Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.987755 4680 generic.go:334] "Generic (PLEG): container finished" podID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerID="b93831a2959d3b4f13de8a1aff5b63c7e2c0d08a95ef2e91fdabdef3397c77ba" exitCode=0 Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.988060 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerDied","Data":"b93831a2959d3b4f13de8a1aff5b63c7e2c0d08a95ef2e91fdabdef3397c77ba"} Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.996690 4680 generic.go:334] "Generic (PLEG): container finished" podID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerID="d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71" exitCode=0 Jan 26 16:20:01 crc kubenswrapper[4680]: I0126 16:20:01.997758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerDied","Data":"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71"} Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.003445 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-mqkb5" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.018745 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-mqkb5" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.093578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.093659 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6czs\" (UniqueName: \"kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.093681 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.194354 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content\") pod 
\"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.194452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6czs\" (UniqueName: \"kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.194470 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.195603 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.195635 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.220789 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6czs\" (UniqueName: \"kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs\") pod \"redhat-marketplace-h4c4s\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.245334 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:02 crc kubenswrapper[4680]: I0126 16:20:02.665372 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.010047 4680 generic.go:334] "Generic (PLEG): container finished" podID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerID="ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461" exitCode=0 Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.010481 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerDied","Data":"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461"} Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.013239 4680 generic.go:334] "Generic (PLEG): container finished" podID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerID="a495452ab7ab41821f06903c84c43c8f0ffd9c031beb25538837ce04dc04da9f" exitCode=0 Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.013395 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerDied","Data":"a495452ab7ab41821f06903c84c43c8f0ffd9c031beb25538837ce04dc04da9f"} Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.024530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerStarted","Data":"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc"} Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.027283 4680 generic.go:334] "Generic (PLEG): container finished" podID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerID="dd086eabe6cf49f4a55c7ba37c6dd4747be035537e4247d36061d9ee3ea008c7" exitCode=0 Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.028324 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerDied","Data":"dd086eabe6cf49f4a55c7ba37c6dd4747be035537e4247d36061d9ee3ea008c7"} Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.028394 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerStarted","Data":"7b5ef4c3554b8c10e9a79d41df075e87defae594d5ef5e23316dda835553b8b0"} Jan 26 16:20:03 crc kubenswrapper[4680]: I0126 16:20:03.073019 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jt7s" podStartSLOduration=5.069796753 podStartE2EDuration="8.073003011s" podCreationTimestamp="2026-01-26 16:19:55 +0000 UTC" firstStartedPulling="2026-01-26 16:19:59.763523585 +0000 UTC m=+874.924795854" lastFinishedPulling="2026-01-26 16:20:02.766729843 +0000 UTC m=+877.928002112" observedRunningTime="2026-01-26 16:20:03.071813287 +0000 UTC m=+878.233085586" watchObservedRunningTime="2026-01-26 16:20:03.073003011 +0000 UTC m=+878.234275280" Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.035797 4680 generic.go:334] "Generic (PLEG): container finished" podID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerID="37d7719621eeea749139eca5058f9fe3a55ae579f2877cbfac148c3baf8d5ac5" exitCode=0 Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 
16:20:04.036162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerDied","Data":"37d7719621eeea749139eca5058f9fe3a55ae579f2877cbfac148c3baf8d5ac5"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.039464 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerStarted","Data":"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.043143 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"7131c4ff634af4f1d39ba6adcf4fe578f7ecf801584d15f08ca312141cfddec0"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.043172 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"7293f902710d528fb1b1722a8d91d2a68b6db522c4adad760a2a6edd4b7781d1"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.043184 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"2a15765dca2a902f9a8d4f02815747d0e7bb2e70777a6f85bc512967242e8757"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.043195 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"b7b0d359c4a71dd6bc0e757298285d8a5875e1e8d2b5088b6bc13d716dd23bdf"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.043204 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"03cb5e522f15221c6c84868a0aab1d420b7e384b6cea2eb06a3c3b336f180e04"} Jan 26 16:20:04 crc kubenswrapper[4680]: I0126 16:20:04.078807 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tkvl7" podStartSLOduration=5.456351122 podStartE2EDuration="9.078788114s" podCreationTimestamp="2026-01-26 16:19:55 +0000 UTC" firstStartedPulling="2026-01-26 16:19:59.763566786 +0000 UTC m=+874.924839055" lastFinishedPulling="2026-01-26 16:20:03.386003768 +0000 UTC m=+878.547276047" observedRunningTime="2026-01-26 16:20:04.077413865 +0000 UTC m=+879.238686134" watchObservedRunningTime="2026-01-26 16:20:04.078788114 +0000 UTC m=+879.240060383" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.053408 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fppvg" event={"ID":"6fcb2787-4ea2-498d-9d2b-92577f4e0640","Type":"ContainerStarted","Data":"245829dde4ef1745fe185c47c14edadef9244e179c2c51e040efc9fe9f4c6b78"} Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.426803 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.466294 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.488398 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-fppvg" podStartSLOduration=6.187122203 podStartE2EDuration="15.488377743s" podCreationTimestamp="2026-01-26 16:19:50 +0000 UTC" firstStartedPulling="2026-01-26 16:19:50.559579619 +0000 UTC m=+865.720851888" lastFinishedPulling="2026-01-26 16:19:59.860835159 +0000 UTC m=+875.022107428" observedRunningTime="2026-01-26 16:20:05.088121526 +0000 UTC m=+880.249393795" watchObservedRunningTime="2026-01-26 16:20:05.488377743 +0000 UTC m=+880.649650012" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.840321 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.840398 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:05 crc kubenswrapper[4680]: I0126 16:20:05.887405 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.060914 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerStarted","Data":"6964856ec16da7f636aa0647518c6e14178698a360305a93ab00b5fb67c6b92d"} Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.062043 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.083186 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.083224 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.108603 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4c4s" podStartSLOduration=3.294008664 podStartE2EDuration="5.108581985s" podCreationTimestamp="2026-01-26 16:20:01 +0000 UTC" firstStartedPulling="2026-01-26 16:20:03.029454638 +0000 UTC m=+878.190726907" lastFinishedPulling="2026-01-26 16:20:04.844027959 +0000 UTC m=+880.005300228" observedRunningTime="2026-01-26 16:20:06.108105071 +0000 UTC m=+881.269377340" watchObservedRunningTime="2026-01-26 16:20:06.108581985 +0000 UTC m=+881.269854254" Jan 26 16:20:06 crc kubenswrapper[4680]: I0126 16:20:06.128955 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.326014 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-97dr2"] Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.327274 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.331281 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cnsjr" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.332417 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.333328 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.348903 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-97dr2"] Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.407807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzsc\" (UniqueName: \"kubernetes.io/projected/cba0da08-3a4e-425d-9bc0-4318cf5953be-kube-api-access-fzzsc\") pod \"openstack-operator-index-97dr2\" (UID: \"cba0da08-3a4e-425d-9bc0-4318cf5953be\") " pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.418501 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.508858 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzsc\" (UniqueName: \"kubernetes.io/projected/cba0da08-3a4e-425d-9bc0-4318cf5953be-kube-api-access-fzzsc\") pod \"openstack-operator-index-97dr2\" (UID: \"cba0da08-3a4e-425d-9bc0-4318cf5953be\") " pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.533290 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzsc\" (UniqueName: \"kubernetes.io/projected/cba0da08-3a4e-425d-9bc0-4318cf5953be-kube-api-access-fzzsc\") pod \"openstack-operator-index-97dr2\" (UID: \"cba0da08-3a4e-425d-9bc0-4318cf5953be\") " pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:10 crc kubenswrapper[4680]: I0126 16:20:10.679610 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:11 crc kubenswrapper[4680]: I0126 16:20:11.181915 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-97dr2"] Jan 26 16:20:12 crc kubenswrapper[4680]: I0126 16:20:12.102136 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-97dr2" event={"ID":"cba0da08-3a4e-425d-9bc0-4318cf5953be","Type":"ContainerStarted","Data":"ff8d4991c5ec9a20f3cd4f0d75027fc97abbd6ea8f78ba60a6e2778e17d1abdb"} Jan 26 16:20:12 crc kubenswrapper[4680]: I0126 16:20:12.245816 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:12 crc kubenswrapper[4680]: I0126 16:20:12.245871 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:12 crc kubenswrapper[4680]: I0126 16:20:12.312677 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:13 crc kubenswrapper[4680]: I0126 16:20:13.182086 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:15 crc kubenswrapper[4680]: I0126 16:20:15.876268 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:16 crc kubenswrapper[4680]: I0126 16:20:16.120294 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:17 crc kubenswrapper[4680]: I0126 16:20:17.315017 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:17 crc kubenswrapper[4680]: I0126 16:20:17.316844 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4c4s" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="registry-server" containerID="cri-o://6964856ec16da7f636aa0647518c6e14178698a360305a93ab00b5fb67c6b92d" gracePeriod=2 Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.150337 4680 generic.go:334] "Generic (PLEG): container finished" podID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerID="6964856ec16da7f636aa0647518c6e14178698a360305a93ab00b5fb67c6b92d" exitCode=0 Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.150417 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerDied","Data":"6964856ec16da7f636aa0647518c6e14178698a360305a93ab00b5fb67c6b92d"} Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.152155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-97dr2" event={"ID":"cba0da08-3a4e-425d-9bc0-4318cf5953be","Type":"ContainerStarted","Data":"75e73736b07eb0f8df5cb0486af56606ff77dc108af9e4940e23a0e3ab77bf24"} Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.167186 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-97dr2" podStartSLOduration=2.622336728 podStartE2EDuration="8.167164111s" podCreationTimestamp="2026-01-26 16:20:10 +0000 UTC" firstStartedPulling="2026-01-26 16:20:11.183868661 +0000 UTC 
m=+886.345140950" lastFinishedPulling="2026-01-26 16:20:16.728696064 +0000 UTC m=+891.889968333" observedRunningTime="2026-01-26 16:20:18.164112705 +0000 UTC m=+893.325384974" watchObservedRunningTime="2026-01-26 16:20:18.167164111 +0000 UTC m=+893.328436380" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.637612 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.829511 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities\") pod \"b66cea4d-261b-42bd-a600-f630e5ff07fb\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.829652 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content\") pod \"b66cea4d-261b-42bd-a600-f630e5ff07fb\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.829679 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6czs\" (UniqueName: \"kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs\") pod \"b66cea4d-261b-42bd-a600-f630e5ff07fb\" (UID: \"b66cea4d-261b-42bd-a600-f630e5ff07fb\") " Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.831442 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities" (OuterVolumeSpecName: "utilities") pod "b66cea4d-261b-42bd-a600-f630e5ff07fb" (UID: "b66cea4d-261b-42bd-a600-f630e5ff07fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.842240 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs" (OuterVolumeSpecName: "kube-api-access-k6czs") pod "b66cea4d-261b-42bd-a600-f630e5ff07fb" (UID: "b66cea4d-261b-42bd-a600-f630e5ff07fb"). InnerVolumeSpecName "kube-api-access-k6czs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.851451 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b66cea4d-261b-42bd-a600-f630e5ff07fb" (UID: "b66cea4d-261b-42bd-a600-f630e5ff07fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.931757 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.932033 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66cea4d-261b-42bd-a600-f630e5ff07fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:18 crc kubenswrapper[4680]: I0126 16:20:18.932047 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6czs\" (UniqueName: \"kubernetes.io/projected/b66cea4d-261b-42bd-a600-f630e5ff07fb-kube-api-access-k6czs\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.160209 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4c4s" event={"ID":"b66cea4d-261b-42bd-a600-f630e5ff07fb","Type":"ContainerDied","Data":"7b5ef4c3554b8c10e9a79d41df075e87defae594d5ef5e23316dda835553b8b0"} Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.160267 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4c4s" Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.160486 4680 scope.go:117] "RemoveContainer" containerID="6964856ec16da7f636aa0647518c6e14178698a360305a93ab00b5fb67c6b92d" Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.177956 4680 scope.go:117] "RemoveContainer" containerID="37d7719621eeea749139eca5058f9fe3a55ae579f2877cbfac148c3baf8d5ac5" Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.198305 4680 scope.go:117] "RemoveContainer" containerID="dd086eabe6cf49f4a55c7ba37c6dd4747be035537e4247d36061d9ee3ea008c7" Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.206217 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:19 crc kubenswrapper[4680]: I0126 16:20:19.213271 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4c4s"] Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.429114 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fppvg" Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.680200 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.680234 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.704839 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.719684 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:20:20 crc kubenswrapper[4680]: I0126 16:20:20.720032 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jt7s" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="registry-server" containerID="cri-o://d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc" 
gracePeriod=2 Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.115024 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.115658 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tkvl7" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="registry-server" containerID="cri-o://9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6" gracePeriod=2 Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.141949 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.181490 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" path="/var/lib/kubelet/pods/b66cea4d-261b-42bd-a600-f630e5ff07fb/volumes" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.184803 4680 generic.go:334] "Generic (PLEG): container finished" podID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerID="d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc" exitCode=0 Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.185390 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jt7s" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.207553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerDied","Data":"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc"} Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.207595 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jt7s" event={"ID":"4424f944-d765-4fe7-ad0a-48a438cc6fca","Type":"ContainerDied","Data":"5c3e718367a3c1ec4235076bf7d919789de3644a1fb2e652917eefffa01e95a6"} Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.207616 4680 scope.go:117] "RemoveContainer" containerID="d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.233874 4680 scope.go:117] "RemoveContainer" containerID="d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.261515 4680 scope.go:117] "RemoveContainer" containerID="9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.264456 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content\") pod \"4424f944-d765-4fe7-ad0a-48a438cc6fca\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.264508 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities\") pod \"4424f944-d765-4fe7-ad0a-48a438cc6fca\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.264605 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwxp\" (UniqueName: 
\"kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp\") pod \"4424f944-d765-4fe7-ad0a-48a438cc6fca\" (UID: \"4424f944-d765-4fe7-ad0a-48a438cc6fca\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.268977 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities" (OuterVolumeSpecName: "utilities") pod "4424f944-d765-4fe7-ad0a-48a438cc6fca" (UID: "4424f944-d765-4fe7-ad0a-48a438cc6fca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.273809 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp" (OuterVolumeSpecName: "kube-api-access-2fwxp") pod "4424f944-d765-4fe7-ad0a-48a438cc6fca" (UID: "4424f944-d765-4fe7-ad0a-48a438cc6fca"). InnerVolumeSpecName "kube-api-access-2fwxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.294639 4680 scope.go:117] "RemoveContainer" containerID="d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc" Jan 26 16:20:21 crc kubenswrapper[4680]: E0126 16:20:21.295439 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc\": container with ID starting with d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc not found: ID does not exist" containerID="d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.303098 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc"} err="failed to get container status \"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc\": rpc error: code = NotFound desc = could not find container \"d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc\": container with ID starting with d6687eb713001b63b90a5f106ce3fb4685ecdb1176d238e086d40b0affcededc not found: ID does not exist" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.303153 4680 scope.go:117] "RemoveContainer" containerID="d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71" Jan 26 16:20:21 crc kubenswrapper[4680]: E0126 16:20:21.303571 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71\": container with ID starting with d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71 not found: ID does not exist" containerID="d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.303594 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71"} err="failed to get container status \"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71\": rpc error: code = NotFound desc = could not find container \"d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71\": container with ID starting with d55856e47a7815d4eab835c7a5c6f65d1d33bc534543144519315259f6ae3f71 not found: ID does not exist" Jan 26 
16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.303608 4680 scope.go:117] "RemoveContainer" containerID="9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898" Jan 26 16:20:21 crc kubenswrapper[4680]: E0126 16:20:21.307133 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898\": container with ID starting with 9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898 not found: ID does not exist" containerID="9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.307161 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898"} err="failed to get container status \"9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898\": rpc error: code = NotFound desc = could not find container \"9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898\": container with ID starting with 9f5c234785ea15074a5913ec48cae02fd16d5d853de887a2363f8f78b8161898 not found: ID does not exist" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.342482 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4424f944-d765-4fe7-ad0a-48a438cc6fca" (UID: "4424f944-d765-4fe7-ad0a-48a438cc6fca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.366049 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.366116 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4424f944-d765-4fe7-ad0a-48a438cc6fca-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.366126 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwxp\" (UniqueName: \"kubernetes.io/projected/4424f944-d765-4fe7-ad0a-48a438cc6fca-kube-api-access-2fwxp\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.518375 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.527116 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.537246 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jt7s"] Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.673020 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content\") pod \"e0c4197b-8c13-4bd7-9069-cf93833ad305\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.673147 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities\") pod \"e0c4197b-8c13-4bd7-9069-cf93833ad305\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.673197 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4h4v\" (UniqueName: \"kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v\") pod \"e0c4197b-8c13-4bd7-9069-cf93833ad305\" (UID: \"e0c4197b-8c13-4bd7-9069-cf93833ad305\") " Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.674677 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities" (OuterVolumeSpecName: "utilities") pod "e0c4197b-8c13-4bd7-9069-cf93833ad305" (UID: "e0c4197b-8c13-4bd7-9069-cf93833ad305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.676893 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v" (OuterVolumeSpecName: "kube-api-access-j4h4v") pod "e0c4197b-8c13-4bd7-9069-cf93833ad305" (UID: "e0c4197b-8c13-4bd7-9069-cf93833ad305"). InnerVolumeSpecName "kube-api-access-j4h4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.719343 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0c4197b-8c13-4bd7-9069-cf93833ad305" (UID: "e0c4197b-8c13-4bd7-9069-cf93833ad305"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.774578 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.774633 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c4197b-8c13-4bd7-9069-cf93833ad305-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:21 crc kubenswrapper[4680]: I0126 16:20:21.774650 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4h4v\" (UniqueName: \"kubernetes.io/projected/e0c4197b-8c13-4bd7-9069-cf93833ad305-kube-api-access-j4h4v\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.195969 4680 generic.go:334] "Generic (PLEG): container finished" podID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerID="9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6" exitCode=0 Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.196019 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkvl7" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.196051 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerDied","Data":"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6"} Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.196599 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkvl7" event={"ID":"e0c4197b-8c13-4bd7-9069-cf93833ad305","Type":"ContainerDied","Data":"60e44e551225b3be6bb464704904d7a6b89d4e84341c447a206507ffeeab8ad2"} Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.196623 4680 scope.go:117] "RemoveContainer" containerID="9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.215672 4680 scope.go:117] "RemoveContainer" containerID="ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.227617 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.236629 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tkvl7"] Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.248353 4680 scope.go:117] "RemoveContainer" containerID="e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.264669 4680 scope.go:117] "RemoveContainer" containerID="9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6" Jan 26 16:20:22 crc kubenswrapper[4680]: E0126 16:20:22.265305 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6\": container with ID starting with 9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6 not found: ID does not exist" containerID="9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.265353 
4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6"} err="failed to get container status \"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6\": rpc error: code = NotFound desc = could not find container \"9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6\": container with ID starting with 9b89e2365d6ad5177fd8066e2675eb15796a206f0aa6646e8510a981d73baff6 not found: ID does not exist" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.265385 4680 scope.go:117] "RemoveContainer" containerID="ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461" Jan 26 16:20:22 crc kubenswrapper[4680]: E0126 16:20:22.265734 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461\": container with ID starting with ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461 not found: ID does not exist" containerID="ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.265880 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461"} err="failed to get container status \"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461\": rpc error: code = NotFound desc = could not find container \"ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461\": container with ID starting with ded85a0881143626bd03cfee77222d07b060f1ab2cbfd85ba69546c5ddc62461 not found: ID does not exist" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.266000 4680 scope.go:117] "RemoveContainer" containerID="e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d" Jan 26 16:20:22 crc kubenswrapper[4680]: E0126 16:20:22.266665 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d\": container with ID starting with e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d not found: ID does not exist" containerID="e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d" Jan 26 16:20:22 crc kubenswrapper[4680]: I0126 16:20:22.266705 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d"} err="failed to get container status \"e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d\": rpc error: code = NotFound desc = could not find container \"e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d\": container with ID starting with e06abf35f9161e240e48a5956644747b8fed625c1f9d2033bf227f72eb3d9f9d not found: ID does not exist" Jan 26 16:20:23 crc kubenswrapper[4680]: I0126 16:20:23.176785 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" path="/var/lib/kubelet/pods/4424f944-d765-4fe7-ad0a-48a438cc6fca/volumes" Jan 26 16:20:23 crc kubenswrapper[4680]: I0126 16:20:23.177412 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" path="/var/lib/kubelet/pods/e0c4197b-8c13-4bd7-9069-cf93833ad305/volumes" Jan 26 16:20:30 crc kubenswrapper[4680]: I0126 
16:20:30.709348 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-97dr2" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.159992 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85"] Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160620 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="extract-utilities" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160637 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="extract-utilities" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160655 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160663 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160675 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="extract-utilities" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160683 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="extract-utilities" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160696 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160704 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160719 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160727 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160741 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160748 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160760 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160766 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160778 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="extract-utilities" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160785 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="extract-utilities" Jan 26 
16:20:32 crc kubenswrapper[4680]: E0126 16:20:32.160798 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160809 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="extract-content" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160943 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4424f944-d765-4fe7-ad0a-48a438cc6fca" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160954 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c4197b-8c13-4bd7-9069-cf93833ad305" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.160962 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66cea4d-261b-42bd-a600-f630e5ff07fb" containerName="registry-server" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.161758 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.164261 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hdkb4" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.171040 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85"] Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.320265 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhd4l\" (UniqueName: \"kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.320429 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.320452 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.421599 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhd4l\" (UniqueName: \"kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 
16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.421653 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.421675 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.422104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.422136 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.441436 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhd4l\" (UniqueName: \"kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l\") pod \"66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.487977 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:32 crc kubenswrapper[4680]: I0126 16:20:32.735840 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85"] Jan 26 16:20:33 crc kubenswrapper[4680]: I0126 16:20:33.272847 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerID="8ae371668c35a6ea98c86a1d38d490550e450aa68ccac8a76b0e6bab14cf1cd4" exitCode=0 Jan 26 16:20:33 crc kubenswrapper[4680]: I0126 16:20:33.272942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" event={"ID":"c2798e64-6b7a-4560-ade7-f76c39ecaccb","Type":"ContainerDied","Data":"8ae371668c35a6ea98c86a1d38d490550e450aa68ccac8a76b0e6bab14cf1cd4"} Jan 26 16:20:33 crc kubenswrapper[4680]: I0126 16:20:33.273139 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" event={"ID":"c2798e64-6b7a-4560-ade7-f76c39ecaccb","Type":"ContainerStarted","Data":"5d7a8f97a05dd814aa567bf13853d935c75ac88de48f57ff5174092882679825"} Jan 26 16:20:34 crc kubenswrapper[4680]: I0126 16:20:34.279335 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerID="c72c80d5f08522cfb24c2a727e8682768ebb6ddc21881d75f4bdb98261c10cdb" exitCode=0 Jan 26 16:20:34 crc kubenswrapper[4680]: I0126 16:20:34.279620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" event={"ID":"c2798e64-6b7a-4560-ade7-f76c39ecaccb","Type":"ContainerDied","Data":"c72c80d5f08522cfb24c2a727e8682768ebb6ddc21881d75f4bdb98261c10cdb"} Jan 26 16:20:35 crc kubenswrapper[4680]: I0126 16:20:35.287225 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerID="4aa226e9104a2363265adea8b6b851a26418a3e2c69e3c80007195291f528bdf" exitCode=0 Jan 26 16:20:35 crc kubenswrapper[4680]: I0126 16:20:35.287268 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" event={"ID":"c2798e64-6b7a-4560-ade7-f76c39ecaccb","Type":"ContainerDied","Data":"4aa226e9104a2363265adea8b6b851a26418a3e2c69e3c80007195291f528bdf"} Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.563091 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.673895 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util\") pod \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.673977 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhd4l\" (UniqueName: \"kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l\") pod \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.674046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle\") pod \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\" (UID: \"c2798e64-6b7a-4560-ade7-f76c39ecaccb\") " Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.675472 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle" (OuterVolumeSpecName: "bundle") pod "c2798e64-6b7a-4560-ade7-f76c39ecaccb" (UID: "c2798e64-6b7a-4560-ade7-f76c39ecaccb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.679553 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l" (OuterVolumeSpecName: "kube-api-access-xhd4l") pod "c2798e64-6b7a-4560-ade7-f76c39ecaccb" (UID: "c2798e64-6b7a-4560-ade7-f76c39ecaccb"). InnerVolumeSpecName "kube-api-access-xhd4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.688015 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util" (OuterVolumeSpecName: "util") pod "c2798e64-6b7a-4560-ade7-f76c39ecaccb" (UID: "c2798e64-6b7a-4560-ade7-f76c39ecaccb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.775439 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhd4l\" (UniqueName: \"kubernetes.io/projected/c2798e64-6b7a-4560-ade7-f76c39ecaccb-kube-api-access-xhd4l\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.775637 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:36 crc kubenswrapper[4680]: I0126 16:20:36.775727 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2798e64-6b7a-4560-ade7-f76c39ecaccb-util\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:37 crc kubenswrapper[4680]: I0126 16:20:37.302351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" event={"ID":"c2798e64-6b7a-4560-ade7-f76c39ecaccb","Type":"ContainerDied","Data":"5d7a8f97a05dd814aa567bf13853d935c75ac88de48f57ff5174092882679825"} Jan 26 16:20:37 crc kubenswrapper[4680]: I0126 16:20:37.302732 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7a8f97a05dd814aa567bf13853d935c75ac88de48f57ff5174092882679825" Jan 26 16:20:37 crc kubenswrapper[4680]: I0126 16:20:37.302451 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/66922d8a38390dddb5bd3f73a26daecdf14a8a1bc71f9c6372ba2dacae2hs85" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.880522 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8"] Jan 26 16:20:39 crc kubenswrapper[4680]: E0126 16:20:39.880818 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="pull" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.880835 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="pull" Jan 26 16:20:39 crc kubenswrapper[4680]: E0126 16:20:39.880859 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="extract" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.880867 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="extract" Jan 26 16:20:39 crc kubenswrapper[4680]: E0126 16:20:39.880879 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="util" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.880886 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="util" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.881114 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2798e64-6b7a-4560-ade7-f76c39ecaccb" containerName="extract" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.881619 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.883777 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qfmtc" Jan 26 16:20:39 crc kubenswrapper[4680]: I0126 16:20:39.904349 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8"] Jan 26 16:20:40 crc kubenswrapper[4680]: I0126 16:20:40.019535 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2r4l\" (UniqueName: \"kubernetes.io/projected/0d273daa-c1c4-4746-9e28-abf5e15aa387-kube-api-access-l2r4l\") pod \"openstack-operator-controller-init-5847574fb9-wfhs8\" (UID: \"0d273daa-c1c4-4746-9e28-abf5e15aa387\") " pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:40 crc kubenswrapper[4680]: I0126 16:20:40.120512 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2r4l\" (UniqueName: \"kubernetes.io/projected/0d273daa-c1c4-4746-9e28-abf5e15aa387-kube-api-access-l2r4l\") pod \"openstack-operator-controller-init-5847574fb9-wfhs8\" (UID: \"0d273daa-c1c4-4746-9e28-abf5e15aa387\") " pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:40 crc kubenswrapper[4680]: I0126 16:20:40.139388 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2r4l\" (UniqueName: \"kubernetes.io/projected/0d273daa-c1c4-4746-9e28-abf5e15aa387-kube-api-access-l2r4l\") pod \"openstack-operator-controller-init-5847574fb9-wfhs8\" (UID: \"0d273daa-c1c4-4746-9e28-abf5e15aa387\") " pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:40 crc kubenswrapper[4680]: I0126 16:20:40.198864 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:40 crc kubenswrapper[4680]: I0126 16:20:40.616147 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8"] Jan 26 16:20:41 crc kubenswrapper[4680]: I0126 16:20:41.330257 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" event={"ID":"0d273daa-c1c4-4746-9e28-abf5e15aa387","Type":"ContainerStarted","Data":"1ae256e4bd4ae75d87a32686dc3fd18dd8b9c5759d238881446d711aaedb3e14"} Jan 26 16:20:45 crc kubenswrapper[4680]: I0126 16:20:45.364713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" event={"ID":"0d273daa-c1c4-4746-9e28-abf5e15aa387","Type":"ContainerStarted","Data":"0098e59506ea938987bd11506235ed44247bdb8905d5a45b03a7916e9d68e496"} Jan 26 16:20:45 crc kubenswrapper[4680]: I0126 16:20:45.365325 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:20:45 crc kubenswrapper[4680]: I0126 16:20:45.395022 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" podStartSLOduration=2.121441588 podStartE2EDuration="6.395003356s" podCreationTimestamp="2026-01-26 16:20:39 +0000 UTC" firstStartedPulling="2026-01-26 16:20:40.624077062 +0000 UTC m=+915.785349341" lastFinishedPulling="2026-01-26 16:20:44.89763884 +0000 UTC m=+920.058911109" observedRunningTime="2026-01-26 16:20:45.38950166 +0000 UTC m=+920.550773939" watchObservedRunningTime="2026-01-26 16:20:45.395003356 +0000 UTC m=+920.556275615" Jan 26 16:20:50 crc kubenswrapper[4680]: I0126 16:20:50.202144 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.549532 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.551023 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.554881 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r68rz" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.557796 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.558706 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.564184 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-65rcn" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.570195 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.580344 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.581239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2cc\" (UniqueName: \"kubernetes.io/projected/e9eb4184-e77b-49c1-b4af-cae5dc77b953-kube-api-access-qj2cc\") pod \"barbican-operator-controller-manager-7f86f8796f-lhqbm\" (UID: \"e9eb4184-e77b-49c1-b4af-cae5dc77b953\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.581271 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.581334 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/7619b024-3fab-49a5-abec-5b31e09a5c51-kube-api-access-r79f8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vpmf7\" (UID: \"7619b024-3fab-49a5-abec-5b31e09a5c51\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.583477 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kg6js" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.593751 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.615985 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.666866 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.667577 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.670574 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wrj56" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.672957 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.682422 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbdh\" (UniqueName: \"kubernetes.io/projected/58579a35-1ab3-4610-9d38-66824866b438-kube-api-access-chbdh\") pod \"glance-operator-controller-manager-78fdd796fd-t4zl4\" (UID: \"58579a35-1ab3-4610-9d38-66824866b438\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.682479 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvs6\" (UniqueName: \"kubernetes.io/projected/c4788302-e01e-485b-b716-a6db7a2ac272-kube-api-access-bkvs6\") pod \"designate-operator-controller-manager-b45d7bf98-xmpgf\" (UID: \"c4788302-e01e-485b-b716-a6db7a2ac272\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.682536 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2cc\" (UniqueName: \"kubernetes.io/projected/e9eb4184-e77b-49c1-b4af-cae5dc77b953-kube-api-access-qj2cc\") pod \"barbican-operator-controller-manager-7f86f8796f-lhqbm\" (UID: \"e9eb4184-e77b-49c1-b4af-cae5dc77b953\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.682592 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/7619b024-3fab-49a5-abec-5b31e09a5c51-kube-api-access-r79f8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vpmf7\" (UID: \"7619b024-3fab-49a5-abec-5b31e09a5c51\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.707995 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.708865 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.711880 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g5csp" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.726926 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/7619b024-3fab-49a5-abec-5b31e09a5c51-kube-api-access-r79f8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vpmf7\" (UID: \"7619b024-3fab-49a5-abec-5b31e09a5c51\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.735382 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.759135 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2cc\" (UniqueName: \"kubernetes.io/projected/e9eb4184-e77b-49c1-b4af-cae5dc77b953-kube-api-access-qj2cc\") pod \"barbican-operator-controller-manager-7f86f8796f-lhqbm\" (UID: \"e9eb4184-e77b-49c1-b4af-cae5dc77b953\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.776305 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.783395 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.787596 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvs6\" (UniqueName: \"kubernetes.io/projected/c4788302-e01e-485b-b716-a6db7a2ac272-kube-api-access-bkvs6\") pod \"designate-operator-controller-manager-b45d7bf98-xmpgf\" (UID: \"c4788302-e01e-485b-b716-a6db7a2ac272\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.788593 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p6njx" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.793906 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chbdh\" (UniqueName: \"kubernetes.io/projected/58579a35-1ab3-4610-9d38-66824866b438-kube-api-access-chbdh\") pod \"glance-operator-controller-manager-78fdd796fd-t4zl4\" (UID: \"58579a35-1ab3-4610-9d38-66824866b438\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.838311 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.839221 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.841875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvs6\" (UniqueName: \"kubernetes.io/projected/c4788302-e01e-485b-b716-a6db7a2ac272-kube-api-access-bkvs6\") pod \"designate-operator-controller-manager-b45d7bf98-xmpgf\" (UID: \"c4788302-e01e-485b-b716-a6db7a2ac272\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.843448 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.843749 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-bhsf6" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.860293 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.871336 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chbdh\" (UniqueName: \"kubernetes.io/projected/58579a35-1ab3-4610-9d38-66824866b438-kube-api-access-chbdh\") pod \"glance-operator-controller-manager-78fdd796fd-t4zl4\" (UID: \"58579a35-1ab3-4610-9d38-66824866b438\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.874322 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.874602 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.888336 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.900030 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2gt\" (UniqueName: \"kubernetes.io/projected/c9bc9b0e-b690-47c1-92ea-bea335fc0b41-kube-api-access-xc2gt\") pod \"horizon-operator-controller-manager-77d5c5b54f-mmcpt\" (UID: \"c9bc9b0e-b690-47c1-92ea-bea335fc0b41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.900123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.900179 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszpf\" (UniqueName: \"kubernetes.io/projected/51e3cde0-6a23-4d62-83ca-fc16415da2bb-kube-api-access-nszpf\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.900201 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6k9\" (UniqueName: \"kubernetes.io/projected/27eb5e1a-3047-4e87-9ad1-f948e11dfe25-kube-api-access-qc6k9\") pod \"heat-operator-controller-manager-594c8c9d5d-9nr5c\" (UID: \"27eb5e1a-3047-4e87-9ad1-f948e11dfe25\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.910720 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.926135 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.927002 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.948485 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gg2t7" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.950138 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.965892 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.966894 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.969519 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-48jb2" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.989513 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.991100 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz"] Jan 26 16:21:26 crc kubenswrapper[4680]: I0126 16:21:26.999402 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.000906 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszpf\" (UniqueName: \"kubernetes.io/projected/51e3cde0-6a23-4d62-83ca-fc16415da2bb-kube-api-access-nszpf\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.000951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc6k9\" (UniqueName: \"kubernetes.io/projected/27eb5e1a-3047-4e87-9ad1-f948e11dfe25-kube-api-access-qc6k9\") pod \"heat-operator-controller-manager-594c8c9d5d-9nr5c\" (UID: \"27eb5e1a-3047-4e87-9ad1-f948e11dfe25\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.000984 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbjb\" (UniqueName: \"kubernetes.io/projected/bba916e9-436b-4c01-ba4c-2f758ed6d988-kube-api-access-vwbjb\") pod \"keystone-operator-controller-manager-b8b6d4659-8llvz\" (UID: \"bba916e9-436b-4c01-ba4c-2f758ed6d988\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.001024 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbwv\" (UniqueName: \"kubernetes.io/projected/bda51148-a37f-4767-8970-51ea56ecdfc7-kube-api-access-5jbwv\") pod \"ironic-operator-controller-manager-598f7747c9-sl65h\" (UID: \"bda51148-a37f-4767-8970-51ea56ecdfc7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.001048 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2gt\" (UniqueName: \"kubernetes.io/projected/c9bc9b0e-b690-47c1-92ea-bea335fc0b41-kube-api-access-xc2gt\") pod \"horizon-operator-controller-manager-77d5c5b54f-mmcpt\" (UID: \"c9bc9b0e-b690-47c1-92ea-bea335fc0b41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.001156 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: 
\"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.001270 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.001319 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert podName:51e3cde0-6a23-4d62-83ca-fc16415da2bb nodeName:}" failed. No retries permitted until 2026-01-26 16:21:27.501302587 +0000 UTC m=+962.662574856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert") pod "infra-operator-controller-manager-694cf4f878-vxhhf" (UID: "51e3cde0-6a23-4d62-83ca-fc16415da2bb") : secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.001779 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.013392 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zxjb7" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.020162 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.021042 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.029754 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.031173 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-77j96" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.047128 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.060401 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2gt\" (UniqueName: \"kubernetes.io/projected/c9bc9b0e-b690-47c1-92ea-bea335fc0b41-kube-api-access-xc2gt\") pod \"horizon-operator-controller-manager-77d5c5b54f-mmcpt\" (UID: \"c9bc9b0e-b690-47c1-92ea-bea335fc0b41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.061480 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc6k9\" (UniqueName: \"kubernetes.io/projected/27eb5e1a-3047-4e87-9ad1-f948e11dfe25-kube-api-access-qc6k9\") pod \"heat-operator-controller-manager-594c8c9d5d-9nr5c\" (UID: \"27eb5e1a-3047-4e87-9ad1-f948e11dfe25\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.064642 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszpf\" (UniqueName: 
\"kubernetes.io/projected/51e3cde0-6a23-4d62-83ca-fc16415da2bb-kube-api-access-nszpf\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.096856 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.101777 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbjb\" (UniqueName: \"kubernetes.io/projected/bba916e9-436b-4c01-ba4c-2f758ed6d988-kube-api-access-vwbjb\") pod \"keystone-operator-controller-manager-b8b6d4659-8llvz\" (UID: \"bba916e9-436b-4c01-ba4c-2f758ed6d988\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.101843 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbwv\" (UniqueName: \"kubernetes.io/projected/bda51148-a37f-4767-8970-51ea56ecdfc7-kube-api-access-5jbwv\") pod \"ironic-operator-controller-manager-598f7747c9-sl65h\" (UID: \"bda51148-a37f-4767-8970-51ea56ecdfc7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.101909 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nb5k\" (UniqueName: \"kubernetes.io/projected/5c81deb3-0ad3-4ec0-91af-837aee09d577-kube-api-access-7nb5k\") pod \"manila-operator-controller-manager-78c6999f6f-789vw\" (UID: \"5c81deb3-0ad3-4ec0-91af-837aee09d577\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.101951 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhbn\" (UniqueName: \"kubernetes.io/projected/8bd876e3-9283-4de7-80b0-3c1787745bfb-kube-api-access-dkhbn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8\" (UID: \"8bd876e3-9283-4de7-80b0-3c1787745bfb\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.103506 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.104374 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.110732 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-28lxr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.128816 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.150096 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.150808 4680 util.go:30] "No sandbox for pod can be found. 
Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.163166 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gkfnt" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.183270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbjb\" (UniqueName: \"kubernetes.io/projected/bba916e9-436b-4c01-ba4c-2f758ed6d988-kube-api-access-vwbjb\") pod \"keystone-operator-controller-manager-b8b6d4659-8llvz\" (UID: \"bba916e9-436b-4c01-ba4c-2f758ed6d988\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.198641 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jbwv\" (UniqueName: \"kubernetes.io/projected/bda51148-a37f-4767-8970-51ea56ecdfc7-kube-api-access-5jbwv\") pod \"ironic-operator-controller-manager-598f7747c9-sl65h\" (UID: \"bda51148-a37f-4767-8970-51ea56ecdfc7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.209096 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nb5k\" (UniqueName: \"kubernetes.io/projected/5c81deb3-0ad3-4ec0-91af-837aee09d577-kube-api-access-7nb5k\") pod \"manila-operator-controller-manager-78c6999f6f-789vw\" (UID: \"5c81deb3-0ad3-4ec0-91af-837aee09d577\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.209160 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6g2\" (UniqueName: \"kubernetes.io/projected/4ef5b147-3e74-4417-9c89-e0f33fc62eba-kube-api-access-mr6g2\") pod \"neutron-operator-controller-manager-78d58447c5-pjtw6\" (UID: \"4ef5b147-3e74-4417-9c89-e0f33fc62eba\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.209305 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhbn\" (UniqueName: \"kubernetes.io/projected/8bd876e3-9283-4de7-80b0-3c1787745bfb-kube-api-access-dkhbn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8\" (UID: \"8bd876e3-9283-4de7-80b0-3c1787745bfb\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.209422 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nld4\" (UniqueName: \"kubernetes.io/projected/f79a7334-75ae-40a1-81c3-ce27e0567de9-kube-api-access-2nld4\") pod \"nova-operator-controller-manager-7bdb645866-5gkjr\" (UID: \"f79a7334-75ae-40a1-81c3-ce27e0567de9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.211615 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.211655 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.270759 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nb5k\" (UniqueName: \"kubernetes.io/projected/5c81deb3-0ad3-4ec0-91af-837aee09d577-kube-api-access-7nb5k\") pod \"manila-operator-controller-manager-78c6999f6f-789vw\" (UID: \"5c81deb3-0ad3-4ec0-91af-837aee09d577\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.282856 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.309099 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.310293 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6g2\" (UniqueName: \"kubernetes.io/projected/4ef5b147-3e74-4417-9c89-e0f33fc62eba-kube-api-access-mr6g2\") pod \"neutron-operator-controller-manager-78d58447c5-pjtw6\" (UID: \"4ef5b147-3e74-4417-9c89-e0f33fc62eba\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.310408 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nld4\" (UniqueName: \"kubernetes.io/projected/f79a7334-75ae-40a1-81c3-ce27e0567de9-kube-api-access-2nld4\") pod \"nova-operator-controller-manager-7bdb645866-5gkjr\" (UID: \"f79a7334-75ae-40a1-81c3-ce27e0567de9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.343903 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhbn\" (UniqueName: \"kubernetes.io/projected/8bd876e3-9283-4de7-80b0-3c1787745bfb-kube-api-access-dkhbn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8\" (UID: \"8bd876e3-9283-4de7-80b0-3c1787745bfb\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.344460 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.350670 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nld4\" (UniqueName: \"kubernetes.io/projected/f79a7334-75ae-40a1-81c3-ce27e0567de9-kube-api-access-2nld4\") pod \"nova-operator-controller-manager-7bdb645866-5gkjr\" (UID: \"f79a7334-75ae-40a1-81c3-ce27e0567de9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.354241 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.355004 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.361665 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p9dpr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.361729 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6g2\" (UniqueName: \"kubernetes.io/projected/4ef5b147-3e74-4417-9c89-e0f33fc62eba-kube-api-access-mr6g2\") pod \"neutron-operator-controller-manager-78d58447c5-pjtw6\" (UID: \"4ef5b147-3e74-4417-9c89-e0f33fc62eba\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.363716 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.364512 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.369037 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.374605 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zgfmt" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.397724 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.400146 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.400949 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.409480 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dmbs4" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.413692 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.439209 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xmxp\" (UniqueName: \"kubernetes.io/projected/cb36e4de-bd33-4daf-83f5-1ced8ce56c90-kube-api-access-5xmxp\") pod \"octavia-operator-controller-manager-5f4cd88d46-qjcpj\" (UID: \"cb36e4de-bd33-4daf-83f5-1ced8ce56c90\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.439345 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxp7\" (UniqueName: \"kubernetes.io/projected/5140d771-5948-4407-b1d9-aa1aa80415a6-kube-api-access-tlxp7\") pod \"ovn-operator-controller-manager-6f75f45d54-mksvz\" (UID: \"5140d771-5948-4407-b1d9-aa1aa80415a6\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.439393 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwnh\" (UniqueName: \"kubernetes.io/projected/db8c5f93-fbaf-4f34-9214-ec7e463beb79-kube-api-access-hrwnh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.439433 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.452332 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.458417 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.463459 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.463794 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.481309 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q629r" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.498258 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.504967 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.565544 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.587767 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.588029 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjf5g\" (UniqueName: \"kubernetes.io/projected/bb8599b0-8155-440a-a0f5-505f73113a1c-kube-api-access-qjf5g\") pod \"placement-operator-controller-manager-79d5ccc684-xjmwr\" (UID: \"bb8599b0-8155-440a-a0f5-505f73113a1c\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.588244 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxp7\" (UniqueName: \"kubernetes.io/projected/5140d771-5948-4407-b1d9-aa1aa80415a6-kube-api-access-tlxp7\") pod \"ovn-operator-controller-manager-6f75f45d54-mksvz\" (UID: \"5140d771-5948-4407-b1d9-aa1aa80415a6\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.588338 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwnh\" (UniqueName: \"kubernetes.io/projected/db8c5f93-fbaf-4f34-9214-ec7e463beb79-kube-api-access-hrwnh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.588433 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.588765 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmxp\" (UniqueName: \"kubernetes.io/projected/cb36e4de-bd33-4daf-83f5-1ced8ce56c90-kube-api-access-5xmxp\") pod 
\"octavia-operator-controller-manager-5f4cd88d46-qjcpj\" (UID: \"cb36e4de-bd33-4daf-83f5-1ced8ce56c90\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.590251 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.590314 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert podName:51e3cde0-6a23-4d62-83ca-fc16415da2bb nodeName:}" failed. No retries permitted until 2026-01-26 16:21:28.590291525 +0000 UTC m=+963.751563794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert") pod "infra-operator-controller-manager-694cf4f878-vxhhf" (UID: "51e3cde0-6a23-4d62-83ca-fc16415da2bb") : secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.590970 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: E0126 16:21:27.591052 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert podName:db8c5f93-fbaf-4f34-9214-ec7e463beb79 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:28.091032326 +0000 UTC m=+963.252304595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" (UID: "db8c5f93-fbaf-4f34-9214-ec7e463beb79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.655445 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.656859 4680 util.go:30] "No sandbox for pod can be found. 
Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.668079 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmxp\" (UniqueName: \"kubernetes.io/projected/cb36e4de-bd33-4daf-83f5-1ced8ce56c90-kube-api-access-5xmxp\") pod \"octavia-operator-controller-manager-5f4cd88d46-qjcpj\" (UID: \"cb36e4de-bd33-4daf-83f5-1ced8ce56c90\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.668914 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-vx2br" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.683301 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwnh\" (UniqueName: \"kubernetes.io/projected/db8c5f93-fbaf-4f34-9214-ec7e463beb79-kube-api-access-hrwnh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.683754 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxp7\" (UniqueName: \"kubernetes.io/projected/5140d771-5948-4407-b1d9-aa1aa80415a6-kube-api-access-tlxp7\") pod \"ovn-operator-controller-manager-6f75f45d54-mksvz\" (UID: \"5140d771-5948-4407-b1d9-aa1aa80415a6\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.693966 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqdb\" (UniqueName: \"kubernetes.io/projected/505c1441-c509-4792-ac15-8b218143a69f-kube-api-access-kbqdb\") pod \"swift-operator-controller-manager-547cbdb99f-xbcl9\" (UID: \"505c1441-c509-4792-ac15-8b218143a69f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.694641 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjf5g\" (UniqueName: \"kubernetes.io/projected/bb8599b0-8155-440a-a0f5-505f73113a1c-kube-api-access-qjf5g\") pod \"placement-operator-controller-manager-79d5ccc684-xjmwr\" (UID: \"bb8599b0-8155-440a-a0f5-505f73113a1c\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.697114 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.707684 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.747959 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.748745 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.751015 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6z76x" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.767763 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjf5g\" (UniqueName: \"kubernetes.io/projected/bb8599b0-8155-440a-a0f5-505f73113a1c-kube-api-access-qjf5g\") pod \"placement-operator-controller-manager-79d5ccc684-xjmwr\" (UID: \"bb8599b0-8155-440a-a0f5-505f73113a1c\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.768534 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.775359 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.785063 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.785965 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.788774 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cld9t" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.795887 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967dw\" (UniqueName: \"kubernetes.io/projected/19dbdff9-08dd-449c-8794-20b497c7119d-kube-api-access-967dw\") pod \"telemetry-operator-controller-manager-85cd9769bb-fthk4\" (UID: \"19dbdff9-08dd-449c-8794-20b497c7119d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.795944 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbqdb\" (UniqueName: \"kubernetes.io/projected/505c1441-c509-4792-ac15-8b218143a69f-kube-api-access-kbqdb\") pod \"swift-operator-controller-manager-547cbdb99f-xbcl9\" (UID: \"505c1441-c509-4792-ac15-8b218143a69f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.795964 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47n6l\" (UniqueName: \"kubernetes.io/projected/5d25f9f6-553d-477c-82f7-a25f017cb21a-kube-api-access-47n6l\") pod \"test-operator-controller-manager-69797bbcbd-chmcm\" (UID: \"5d25f9f6-553d-477c-82f7-a25f017cb21a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.810932 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.817616 4680 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-564965969-58jzp"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.818853 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.819236 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbqdb\" (UniqueName: \"kubernetes.io/projected/505c1441-c509-4792-ac15-8b218143a69f-kube-api-access-kbqdb\") pod \"swift-operator-controller-manager-547cbdb99f-xbcl9\" (UID: \"505c1441-c509-4792-ac15-8b218143a69f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.825056 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f4mkl" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.836086 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-58jzp"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.861633 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.862643 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.869250 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.869427 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fvp9h" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.869532 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.869626 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.896849 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-967dw\" (UniqueName: \"kubernetes.io/projected/19dbdff9-08dd-449c-8794-20b497c7119d-kube-api-access-967dw\") pod \"telemetry-operator-controller-manager-85cd9769bb-fthk4\" (UID: \"19dbdff9-08dd-449c-8794-20b497c7119d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.896908 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47n6l\" (UniqueName: \"kubernetes.io/projected/5d25f9f6-553d-477c-82f7-a25f017cb21a-kube-api-access-47n6l\") pod \"test-operator-controller-manager-69797bbcbd-chmcm\" (UID: \"5d25f9f6-553d-477c-82f7-a25f017cb21a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.903126 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.904254 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.904837 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.906097 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lxkv7" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.928771 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz"] Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.967138 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47n6l\" (UniqueName: \"kubernetes.io/projected/5d25f9f6-553d-477c-82f7-a25f017cb21a-kube-api-access-47n6l\") pod \"test-operator-controller-manager-69797bbcbd-chmcm\" (UID: \"5d25f9f6-553d-477c-82f7-a25f017cb21a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.968566 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-967dw\" (UniqueName: \"kubernetes.io/projected/19dbdff9-08dd-449c-8794-20b497c7119d-kube-api-access-967dw\") pod \"telemetry-operator-controller-manager-85cd9769bb-fthk4\" (UID: \"19dbdff9-08dd-449c-8794-20b497c7119d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.998778 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm4f\" (UniqueName: \"kubernetes.io/projected/923528ea-e48b-477c-aa11-6912e8167448-kube-api-access-hhm4f\") pod \"watcher-operator-controller-manager-564965969-58jzp\" (UID: \"923528ea-e48b-477c-aa11-6912e8167448\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.998836 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.998865 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99f6\" (UniqueName: \"kubernetes.io/projected/0ba4109b-0e34-4c97-884a-d70052bf8082-kube-api-access-w99f6\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:27 crc kubenswrapper[4680]: I0126 16:21:27.998920 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc 
kubenswrapper[4680]: I0126 16:21:28.023938 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.097483 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100209 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5ff\" (UniqueName: \"kubernetes.io/projected/db3000c1-08a3-4607-8e3f-143b4acc639f-kube-api-access-ps5ff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4tmgz\" (UID: \"db3000c1-08a3-4607-8e3f-143b4acc639f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100257 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100323 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm4f\" (UniqueName: \"kubernetes.io/projected/923528ea-e48b-477c-aa11-6912e8167448-kube-api-access-hhm4f\") pod \"watcher-operator-controller-manager-564965969-58jzp\" (UID: \"923528ea-e48b-477c-aa11-6912e8167448\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100389 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.100407 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w99f6\" (UniqueName: \"kubernetes.io/projected/0ba4109b-0e34-4c97-884a-d70052bf8082-kube-api-access-w99f6\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.101112 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.101256 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert podName:db8c5f93-fbaf-4f34-9214-ec7e463beb79 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:29.101238285 +0000 UTC m=+964.262510554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" (UID: "db8c5f93-fbaf-4f34-9214-ec7e463beb79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.101720 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.101817 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:28.601806721 +0000 UTC m=+963.763078990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "metrics-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.106010 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.106358 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:28.606339379 +0000 UTC m=+963.767611648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.111941 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm"] Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.119559 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf"] Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.176211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm4f\" (UniqueName: \"kubernetes.io/projected/923528ea-e48b-477c-aa11-6912e8167448-kube-api-access-hhm4f\") pod \"watcher-operator-controller-manager-564965969-58jzp\" (UID: \"923528ea-e48b-477c-aa11-6912e8167448\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.178774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w99f6\" (UniqueName: \"kubernetes.io/projected/0ba4109b-0e34-4c97-884a-d70052bf8082-kube-api-access-w99f6\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.204803 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5ff\" (UniqueName: \"kubernetes.io/projected/db3000c1-08a3-4607-8e3f-143b4acc639f-kube-api-access-ps5ff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4tmgz\" (UID: \"db3000c1-08a3-4607-8e3f-143b4acc639f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.261588 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.313007 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5ff\" (UniqueName: \"kubernetes.io/projected/db3000c1-08a3-4607-8e3f-143b4acc639f-kube-api-access-ps5ff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4tmgz\" (UID: \"db3000c1-08a3-4607-8e3f-143b4acc639f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.368494 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.459448 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.613803 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.613862 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.613890 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614025 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614088 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:29.614059277 +0000 UTC m=+964.775331546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "metrics-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614301 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614390 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert podName:51e3cde0-6a23-4d62-83ca-fc16415da2bb nodeName:}" failed. No retries permitted until 2026-01-26 16:21:30.614358696 +0000 UTC m=+965.775631035 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert") pod "infra-operator-controller-manager-694cf4f878-vxhhf" (UID: "51e3cde0-6a23-4d62-83ca-fc16415da2bb") : secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614441 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: E0126 16:21:28.614466 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:29.614458799 +0000 UTC m=+964.775731068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "webhook-server-cert" not found Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.676617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" event={"ID":"e9eb4184-e77b-49c1-b4af-cae5dc77b953","Type":"ContainerStarted","Data":"890450d4a60b5a8d53637c192448b2324f454041dcb5069a85165501c5b12d48"} Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.691296 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" event={"ID":"c4788302-e01e-485b-b716-a6db7a2ac272","Type":"ContainerStarted","Data":"7564842e427e29d06921cfff62f3523ed417b88ad634746f446ee9177ce11d30"} Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.792474 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7"] Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.810110 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4"] Jan 26 16:21:28 crc kubenswrapper[4680]: I0126 16:21:28.882261 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.130499 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.130649 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.130695 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert podName:db8c5f93-fbaf-4f34-9214-ec7e463beb79 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:31.130681437 +0000 UTC m=+966.291953706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" (UID: "db8c5f93-fbaf-4f34-9214-ec7e463beb79") : secret "openstack-baremetal-operator-webhook-server-cert" not found
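
Note the durationBeforeRetry progression for each failing volume across these entries: 500ms, then 1s, then 2s. That doubling is kubelet's exponential backoff for failed volume operations. A minimal sketch of the schedule; only the initial delay and the doubling are taken from the log, and the cap is an assumption for the sketch, not a value confirmed by kubelet:

package main

import (
	"fmt"
	"time"
)

// backoff reproduces the doubling durationBeforeRetry schedule visible in
// the log (500ms -> 1s -> 2s -> ...).
type backoff struct{ delay time.Duration }

func (b *backoff) next() time.Duration {
	const (
		initialDelay = 500 * time.Millisecond // first retry delay seen in the log
		maxDelay     = 2 * time.Minute        // assumed cap, illustrative only
	)
	switch {
	case b.delay == 0:
		b.delay = initialDelay
	case b.delay < maxDelay:
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	return b.delay
}

func main() {
	var b backoff
	for attempt := 1; attempt <= 4; attempt++ {
		// Prints 500ms, 1s, 2s, 4s for attempts 1..4.
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, b.next())
	}
}
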
Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.155659 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8"] Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.184915 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bd876e3_9283_4de7_80b0_3c1787745bfb.slice/crio-36cbca347769ef17d1f1d1d4608389040c449fafe1b208de8dd58eaf33ad0426 WatchSource:0}: Error finding container 36cbca347769ef17d1f1d1d4608389040c449fafe1b208de8dd58eaf33ad0426: Status 404 returned error can't find the container with id 36cbca347769ef17d1f1d1d4608389040c449fafe1b208de8dd58eaf33ad0426 Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.205699 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.223615 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.225574 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.253386 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt"] Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.261550 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf79a7334_75ae_40a1_81c3_ce27e0567de9.slice/crio-a1861fa9e3dbc05c39ce8dda48d451b9a269ec087c4035c707fcafaf9ee06c89 WatchSource:0}: Error finding container a1861fa9e3dbc05c39ce8dda48d451b9a269ec087c4035c707fcafaf9ee06c89: Status 404 returned error can't find the container with id a1861fa9e3dbc05c39ce8dda48d451b9a269ec087c4035c707fcafaf9ee06c89 Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.265015 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.544890 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.557961 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.562600 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr"] Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.575676 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb8599b0_8155_440a_a0f5_505f73113a1c.slice/crio-38a9f41e78f8b05889c157b8a4fb29792c4fa982e1753a993082282c808aea42 WatchSource:0}: Error finding container 
38a9f41e78f8b05889c157b8a4fb29792c4fa982e1753a993082282c808aea42: Status 404 returned error can't find the container with id 38a9f41e78f8b05889c157b8a4fb29792c4fa982e1753a993082282c808aea42 Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.591613 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505c1441_c509_4792_ac15_8b218143a69f.slice/crio-db580e185f4a2b0416a0eecd0c60bb291713345d5975cbbbaf3b13f044752e3f WatchSource:0}: Error finding container db580e185f4a2b0416a0eecd0c60bb291713345d5975cbbbaf3b13f044752e3f: Status 404 returned error can't find the container with id db580e185f4a2b0416a0eecd0c60bb291713345d5975cbbbaf3b13f044752e3f Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.603811 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.611614 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.623954 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.634829 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-58jzp"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.651194 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.651266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.651402 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.651451 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:31.651433474 +0000 UTC m=+966.812705743 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "metrics-server-cert" not found Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.651500 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.651531 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:31.651523257 +0000 UTC m=+966.812795526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "webhook-server-cert" not found Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.652539 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw"] Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.659508 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4"] Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.671770 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d25f9f6_553d_477c_82f7_a25f017cb21a.slice/crio-70cf871fc1e32d36a8577aaf8bdc05f316595ef7fbf62f64cd53c2f73d67428f WatchSource:0}: Error finding container 70cf871fc1e32d36a8577aaf8bdc05f316595ef7fbf62f64cd53c2f73d67428f: Status 404 returned error can't find the container with id 70cf871fc1e32d36a8577aaf8bdc05f316595ef7fbf62f64cd53c2f73d67428f Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.674322 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47n6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-chmcm_openstack-operators(5d25f9f6-553d-477c-82f7-a25f017cb21a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.675682 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.680899 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod923528ea_e48b_477c_aa11_6912e8167448.slice/crio-8cb98fc737141c6abe8152b17c3c6e816c0f81a868fc1855c56009f2ee047cc9 WatchSource:0}: Error finding container 8cb98fc737141c6abe8152b17c3c6e816c0f81a868fc1855c56009f2ee047cc9: Status 404 returned error can't find the container with id 8cb98fc737141c6abe8152b17c3c6e816c0f81a868fc1855c56009f2ee047cc9 Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.681840 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19dbdff9_08dd_449c_8794_20b497c7119d.slice/crio-1adff703844ecdb7775b500ab55f5c21413147c5fc25156518a19871b9a40966 WatchSource:0}: Error finding container 1adff703844ecdb7775b500ab55f5c21413147c5fc25156518a19871b9a40966: Status 404 returned error can't find the container with id 1adff703844ecdb7775b500ab55f5c21413147c5fc25156518a19871b9a40966 Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.687614 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-967dw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-fthk4_openstack-operators(19dbdff9-08dd-449c-8794-20b497c7119d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.687642 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhm4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-58jzp_openstack-operators(923528ea-e48b-477c-aa11-6912e8167448): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.688786 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.688960 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" Jan 26 16:21:29 crc kubenswrapper[4680]: W0126 16:21:29.689402 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5140d771_5948_4407_b1d9_aa1aa80415a6.slice/crio-8059cc8216706fbd54ca4c4d178e1de4e5642cdf341f22575d19963d1f131830 WatchSource:0}: Error finding container 8059cc8216706fbd54ca4c4d178e1de4e5642cdf341f22575d19963d1f131830: Status 404 returned error can't find the container with id 8059cc8216706fbd54ca4c4d178e1de4e5642cdf341f22575d19963d1f131830 Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.692799 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlxp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-mksvz_openstack-operators(5140d771-5948-4407-b1d9-aa1aa80415a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.696469 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.713876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" event={"ID":"bda51148-a37f-4767-8970-51ea56ecdfc7","Type":"ContainerStarted","Data":"ad37670375d3d130805b354ba8392bcb36aea255e29a4ea6e42053225c03c7bd"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.717865 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" event={"ID":"27eb5e1a-3047-4e87-9ad1-f948e11dfe25","Type":"ContainerStarted","Data":"72f8f656770c192509f6ebf4174a142453de1a9a10bcee80288cf18422ae5af9"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.724668 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" event={"ID":"f79a7334-75ae-40a1-81c3-ce27e0567de9","Type":"ContainerStarted","Data":"a1861fa9e3dbc05c39ce8dda48d451b9a269ec087c4035c707fcafaf9ee06c89"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.745606 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" event={"ID":"923528ea-e48b-477c-aa11-6912e8167448","Type":"ContainerStarted","Data":"8cb98fc737141c6abe8152b17c3c6e816c0f81a868fc1855c56009f2ee047cc9"} Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.748203 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.754729 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" event={"ID":"5c81deb3-0ad3-4ec0-91af-837aee09d577","Type":"ContainerStarted","Data":"b79adcef8688a28d88f3d876803ace51f8769993cb0331449228ffecabb3f487"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.760235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" event={"ID":"bb8599b0-8155-440a-a0f5-505f73113a1c","Type":"ContainerStarted","Data":"38a9f41e78f8b05889c157b8a4fb29792c4fa982e1753a993082282c808aea42"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.765849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" event={"ID":"505c1441-c509-4792-ac15-8b218143a69f","Type":"ContainerStarted","Data":"db580e185f4a2b0416a0eecd0c60bb291713345d5975cbbbaf3b13f044752e3f"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.770179 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" event={"ID":"19dbdff9-08dd-449c-8794-20b497c7119d","Type":"ContainerStarted","Data":"1adff703844ecdb7775b500ab55f5c21413147c5fc25156518a19871b9a40966"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.775657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" event={"ID":"4ef5b147-3e74-4417-9c89-e0f33fc62eba","Type":"ContainerStarted","Data":"0ff35ec887bbf0cce74f86d92055bf3e1803bdec143365bd99561d28d443352e"} Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.776957 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.777377 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" event={"ID":"db3000c1-08a3-4607-8e3f-143b4acc639f","Type":"ContainerStarted","Data":"7bb2b9b57775cb3ddba315b7c2eab470693d182367c8a85e9528dc830d557ac9"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.778538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" event={"ID":"c9bc9b0e-b690-47c1-92ea-bea335fc0b41","Type":"ContainerStarted","Data":"f7f999b8f60af9668e64f6ed323755c010d5338d6b72028586008c48434f8dbc"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.781371 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" event={"ID":"5140d771-5948-4407-b1d9-aa1aa80415a6","Type":"ContainerStarted","Data":"8059cc8216706fbd54ca4c4d178e1de4e5642cdf341f22575d19963d1f131830"} Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.790329 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.796270 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" event={"ID":"58579a35-1ab3-4610-9d38-66824866b438","Type":"ContainerStarted","Data":"0d62e59cb9a9653f9d2ea99fa258589fc4b3c281485e0d6958818a58e23e8494"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.798438 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" event={"ID":"5d25f9f6-553d-477c-82f7-a25f017cb21a","Type":"ContainerStarted","Data":"70cf871fc1e32d36a8577aaf8bdc05f316595ef7fbf62f64cd53c2f73d67428f"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.802816 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" event={"ID":"bba916e9-436b-4c01-ba4c-2f758ed6d988","Type":"ContainerStarted","Data":"4f0d9a94cc3832e0a06bd59fd697512f507090d16cf2f7744ddf3d030d8c9173"} Jan 26 16:21:29 crc kubenswrapper[4680]: E0126 16:21:29.802916 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.805581 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" event={"ID":"7619b024-3fab-49a5-abec-5b31e09a5c51","Type":"ContainerStarted","Data":"c35ed5f21e7a1e200aa3297ae77a53e3ee3c3d1bb7660386aa3cc50faaf0886b"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.806872 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" event={"ID":"cb36e4de-bd33-4daf-83f5-1ced8ce56c90","Type":"ContainerStarted","Data":"a6c843f8729b1e61662951b5b6b87143791047b49f2b33e2c8c137219db4b8e7"} Jan 26 16:21:29 crc kubenswrapper[4680]: I0126 16:21:29.807965 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" event={"ID":"8bd876e3-9283-4de7-80b0-3c1787745bfb","Type":"ContainerStarted","Data":"36cbca347769ef17d1f1d1d4608389040c449fafe1b208de8dd58eaf33ad0426"} Jan 26 16:21:30 crc kubenswrapper[4680]: I0126 16:21:30.691206 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.691405 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.691488 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert podName:51e3cde0-6a23-4d62-83ca-fc16415da2bb 
nodeName:}" failed. No retries permitted until 2026-01-26 16:21:34.691469116 +0000 UTC m=+969.852741385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert") pod "infra-operator-controller-manager-694cf4f878-vxhhf" (UID: "51e3cde0-6a23-4d62-83ca-fc16415da2bb") : secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.826171 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.826418 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.826459 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" Jan 26 16:21:30 crc kubenswrapper[4680]: E0126 16:21:30.826493 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" Jan 26 16:21:31 crc kubenswrapper[4680]: I0126 16:21:31.210396 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.210549 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.210601 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert podName:db8c5f93-fbaf-4f34-9214-ec7e463beb79 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:35.210584606 +0000 UTC m=+970.371856875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" (UID: "db8c5f93-fbaf-4f34-9214-ec7e463beb79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:31 crc kubenswrapper[4680]: I0126 16:21:31.718877 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:31 crc kubenswrapper[4680]: I0126 16:21:31.718974 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.719088 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.719161 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:35.719142578 +0000 UTC m=+970.880414837 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "webhook-server-cert" not found Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.719180 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 16:21:31 crc kubenswrapper[4680]: E0126 16:21:31.719230 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:35.71921342 +0000 UTC m=+970.880485689 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "metrics-server-cert" not found Jan 26 16:21:34 crc kubenswrapper[4680]: I0126 16:21:34.790893 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:34 crc kubenswrapper[4680]: E0126 16:21:34.791192 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:34 crc kubenswrapper[4680]: E0126 16:21:34.791369 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert podName:51e3cde0-6a23-4d62-83ca-fc16415da2bb nodeName:}" failed. No retries permitted until 2026-01-26 16:21:42.791348069 +0000 UTC m=+977.952620338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert") pod "infra-operator-controller-manager-694cf4f878-vxhhf" (UID: "51e3cde0-6a23-4d62-83ca-fc16415da2bb") : secret "infra-operator-webhook-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: I0126 16:21:35.299382 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.300410 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.300462 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert podName:db8c5f93-fbaf-4f34-9214-ec7e463beb79 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:43.300448656 +0000 UTC m=+978.461720925 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" (UID: "db8c5f93-fbaf-4f34-9214-ec7e463beb79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: I0126 16:21:35.808768 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:35 crc kubenswrapper[4680]: I0126 16:21:35.808867 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.809028 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.809101 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:43.80908443 +0000 UTC m=+978.970356699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "metrics-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.809427 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 16:21:35 crc kubenswrapper[4680]: E0126 16:21:35.809456 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs podName:0ba4109b-0e34-4c97-884a-d70052bf8082 nodeName:}" failed. No retries permitted until 2026-01-26 16:21:43.80944744 +0000 UTC m=+978.970719709 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs") pod "openstack-operator-controller-manager-757cd979b5-zszgr" (UID: "0ba4109b-0e34-4c97-884a-d70052bf8082") : secret "webhook-server-cert" not found Jan 26 16:21:42 crc kubenswrapper[4680]: I0126 16:21:42.858309 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:42 crc kubenswrapper[4680]: I0126 16:21:42.864769 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51e3cde0-6a23-4d62-83ca-fc16415da2bb-cert\") pod \"infra-operator-controller-manager-694cf4f878-vxhhf\" (UID: \"51e3cde0-6a23-4d62-83ca-fc16415da2bb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.102490 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.367900 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.384510 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8c5f93-fbaf-4f34-9214-ec7e463beb79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854r9htd\" (UID: \"db8c5f93-fbaf-4f34-9214-ec7e463beb79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.616318 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.875044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.875226 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.879006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-metrics-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:43 crc kubenswrapper[4680]: I0126 16:21:43.879006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ba4109b-0e34-4c97-884a-d70052bf8082-webhook-certs\") pod \"openstack-operator-controller-manager-757cd979b5-zszgr\" (UID: \"0ba4109b-0e34-4c97-884a-d70052bf8082\") " pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:44 crc kubenswrapper[4680]: I0126 16:21:44.123575 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:21:46 crc kubenswrapper[4680]: I0126 16:21:46.980550 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:21:46 crc kubenswrapper[4680]: I0126 16:21:46.981607 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:21:47 crc kubenswrapper[4680]: E0126 16:21:47.550222 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 16:21:47 crc kubenswrapper[4680]: E0126 16:21:47.550682 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7nb5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-78c6999f6f-789vw_openstack-operators(5c81deb3-0ad3-4ec0-91af-837aee09d577): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:47 crc kubenswrapper[4680]: E0126 16:21:47.551904 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" Jan 26 16:21:48 crc kubenswrapper[4680]: E0126 16:21:48.048163 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" Jan 26 16:21:48 crc kubenswrapper[4680]: E0126 16:21:48.951739 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 26 16:21:48 crc kubenswrapper[4680]: E0126 16:21:48.951935 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkhbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8_openstack-operators(8bd876e3-9283-4de7-80b0-3c1787745bfb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:48 crc kubenswrapper[4680]: E0126 16:21:48.953191 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podUID="8bd876e3-9283-4de7-80b0-3c1787745bfb" Jan 26 16:21:49 crc kubenswrapper[4680]: E0126 16:21:49.060298 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podUID="8bd876e3-9283-4de7-80b0-3c1787745bfb" Jan 26 16:21:51 crc kubenswrapper[4680]: E0126 16:21:51.758659 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 26 16:21:51 crc kubenswrapper[4680]: E0126 16:21:51.759334 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r79f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-vpmf7_openstack-operators(7619b024-3fab-49a5-abec-5b31e09a5c51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:51 crc kubenswrapper[4680]: E0126 16:21:51.760564 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" podUID="7619b024-3fab-49a5-abec-5b31e09a5c51" Jan 26 16:21:52 crc kubenswrapper[4680]: E0126 16:21:52.079121 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" podUID="7619b024-3fab-49a5-abec-5b31e09a5c51" Jan 26 16:21:52 crc kubenswrapper[4680]: E0126 16:21:52.383158 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 16:21:52 crc kubenswrapper[4680]: E0126 16:21:52.383635 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xmxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-qjcpj_openstack-operators(cb36e4de-bd33-4daf-83f5-1ced8ce56c90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:52 crc kubenswrapper[4680]: E0126 16:21:52.384809 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.085051 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.114777 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.114963 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-sl65h_openstack-operators(bda51148-a37f-4767-8970-51ea56ecdfc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.116521 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" podUID="bda51148-a37f-4767-8970-51ea56ecdfc7" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.642382 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.642608 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mr6g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-pjtw6_openstack-operators(4ef5b147-3e74-4417-9c89-e0f33fc62eba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:53 crc kubenswrapper[4680]: E0126 16:21:53.643822 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" podUID="4ef5b147-3e74-4417-9c89-e0f33fc62eba" Jan 26 16:21:54 crc kubenswrapper[4680]: E0126 16:21:54.092968 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" podUID="bda51148-a37f-4767-8970-51ea56ecdfc7" Jan 26 16:21:54 crc kubenswrapper[4680]: E0126 16:21:54.093091 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" 
podUID="4ef5b147-3e74-4417-9c89-e0f33fc62eba" Jan 26 16:21:54 crc kubenswrapper[4680]: E0126 16:21:54.996555 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 16:21:54 crc kubenswrapper[4680]: E0126 16:21:54.996756 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qjf5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-xjmwr_openstack-operators(bb8599b0-8155-440a-a0f5-505f73113a1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:54 crc kubenswrapper[4680]: E0126 16:21:54.998042 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podUID="bb8599b0-8155-440a-a0f5-505f73113a1c" Jan 26 16:21:55 crc kubenswrapper[4680]: E0126 16:21:55.098861 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podUID="bb8599b0-8155-440a-a0f5-505f73113a1c" Jan 26 16:21:55 crc kubenswrapper[4680]: E0126 16:21:55.642577 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 16:21:55 crc kubenswrapper[4680]: E0126 16:21:55.643029 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbqdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-xbcl9_openstack-operators(505c1441-c509-4792-ac15-8b218143a69f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:21:55 crc kubenswrapper[4680]: E0126 16:21:55.644248 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" 
podUID="505c1441-c509-4792-ac15-8b218143a69f" Jan 26 16:21:56 crc kubenswrapper[4680]: E0126 16:21:56.104985 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" podUID="505c1441-c509-4792-ac15-8b218143a69f" Jan 26 16:21:59 crc kubenswrapper[4680]: I0126 16:21:59.172733 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:21:59 crc kubenswrapper[4680]: E0126 16:21:59.213765 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 16:21:59 crc kubenswrapper[4680]: E0126 16:21:59.214117 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkvs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-xmpgf_openstack-operators(c4788302-e01e-485b-b716-a6db7a2ac272): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Jan 26 16:21:59 crc kubenswrapper[4680]: E0126 16:21:59.215414 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podUID="c4788302-e01e-485b-b716-a6db7a2ac272" Jan 26 16:22:00 crc kubenswrapper[4680]: E0126 16:22:00.130954 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podUID="c4788302-e01e-485b-b716-a6db7a2ac272" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.263254 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.263851 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj2cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-lhqbm_openstack-operators(e9eb4184-e77b-49c1-b4af-cae5dc77b953): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.266064 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podUID="e9eb4184-e77b-49c1-b4af-cae5dc77b953" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.794795 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.795017 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nld4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-5gkjr_openstack-operators(f79a7334-75ae-40a1-81c3-ce27e0567de9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:01 crc kubenswrapper[4680]: E0126 16:22:01.796302 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podUID="f79a7334-75ae-40a1-81c3-ce27e0567de9" Jan 26 16:22:02 crc kubenswrapper[4680]: E0126 16:22:02.151283 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podUID="e9eb4184-e77b-49c1-b4af-cae5dc77b953" Jan 26 16:22:02 crc kubenswrapper[4680]: E0126 16:22:02.154181 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podUID="f79a7334-75ae-40a1-81c3-ce27e0567de9" Jan 26 16:22:05 crc kubenswrapper[4680]: E0126 16:22:05.751309 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 16:22:05 crc kubenswrapper[4680]: E0126 16:22:05.751818 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-chbdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-t4zl4_openstack-operators(58579a35-1ab3-4610-9d38-66824866b438): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:05 crc kubenswrapper[4680]: E0126 16:22:05.753829 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.171761 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.292275 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.292493 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47n6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-chmcm_openstack-operators(5d25f9f6-553d-477c-82f7-a25f017cb21a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.293747 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.838194 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.838354 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhm4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-58jzp_openstack-operators(923528ea-e48b-477c-aa11-6912e8167448): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:06 crc kubenswrapper[4680]: E0126 16:22:06.840064 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" Jan 26 16:22:09 crc kubenswrapper[4680]: E0126 16:22:09.514341 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 16:22:09 crc kubenswrapper[4680]: E0126 16:22:09.515771 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlxp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-mksvz_openstack-operators(5140d771-5948-4407-b1d9-aa1aa80415a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:09 crc kubenswrapper[4680]: E0126 16:22:09.516941 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" Jan 26 16:22:10 crc kubenswrapper[4680]: E0126 16:22:10.456317 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 16:22:10 crc kubenswrapper[4680]: E0126 16:22:10.456733 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-967dw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-fthk4_openstack-operators(19dbdff9-08dd-449c-8794-20b497c7119d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:10 crc kubenswrapper[4680]: E0126 16:22:10.457835 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.245740 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.246219 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwbjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-8llvz_openstack-operators(bba916e9-436b-4c01-ba4c-2f758ed6d988): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.248257 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podUID="bba916e9-436b-4c01-ba4c-2f758ed6d988" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.694639 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.694827 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ps5ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4tmgz_openstack-operators(db3000c1-08a3-4607-8e3f-143b4acc639f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:22:12 crc kubenswrapper[4680]: E0126 16:22:12.696015 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" podUID="db3000c1-08a3-4607-8e3f-143b4acc639f" Jan 26 16:22:13 crc kubenswrapper[4680]: E0126 16:22:13.242282 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podUID="bba916e9-436b-4c01-ba4c-2f758ed6d988" Jan 26 16:22:13 crc kubenswrapper[4680]: E0126 16:22:13.242291 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" podUID="db3000c1-08a3-4607-8e3f-143b4acc639f" Jan 26 16:22:13 crc kubenswrapper[4680]: I0126 16:22:13.335986 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr"] Jan 26 16:22:13 crc kubenswrapper[4680]: I0126 16:22:13.405306 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf"] Jan 26 16:22:13 crc kubenswrapper[4680]: I0126 16:22:13.689007 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd"] Jan 26 16:22:13 crc kubenswrapper[4680]: W0126 16:22:13.717022 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8c5f93_fbaf_4f34_9214_ec7e463beb79.slice/crio-f07166856950d08ed96f941e4c788bf76d62c316c09b16a42891498a91d738e7 WatchSource:0}: Error finding container f07166856950d08ed96f941e4c788bf76d62c316c09b16a42891498a91d738e7: Status 404 returned error can't find the container with id f07166856950d08ed96f941e4c788bf76d62c316c09b16a42891498a91d738e7 Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.245225 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" event={"ID":"8bd876e3-9283-4de7-80b0-3c1787745bfb","Type":"ContainerStarted","Data":"e76821e88e12c4e623f95f707d71c6a53dbbd604ade2de49ffe8890cc33cdabf"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.245993 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.248343 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" event={"ID":"4ef5b147-3e74-4417-9c89-e0f33fc62eba","Type":"ContainerStarted","Data":"0a066773da03b4cdd571a4d5e434147d5c10f4d03e220b6be44ca365b1c9a8a1"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.248774 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.249839 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" event={"ID":"51e3cde0-6a23-4d62-83ca-fc16415da2bb","Type":"ContainerStarted","Data":"01b9fb46b286aebaf82e4901d3dcd444e75c5efc26bf807706c54dd2f5bb6c5d"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.251106 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" event={"ID":"5c81deb3-0ad3-4ec0-91af-837aee09d577","Type":"ContainerStarted","Data":"13817b73a3b5874f8457cfbbcc68f2ed27600bb263af6f1e9b9092ab558bae88"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.251436 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.252733 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" event={"ID":"7619b024-3fab-49a5-abec-5b31e09a5c51","Type":"ContainerStarted","Data":"2c0cf87f79b251cbe632c68083625a53218f101c8488854f97bb2eb7f0e238b8"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.253040 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.254383 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" event={"ID":"bb8599b0-8155-440a-a0f5-505f73113a1c","Type":"ContainerStarted","Data":"ee28209194f02cb31abc16d2a243eb302051f6048fcebc7ca8a3c770cd567f90"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.254684 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.255957 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" event={"ID":"505c1441-c509-4792-ac15-8b218143a69f","Type":"ContainerStarted","Data":"8f9ef36f25a9616ed6afc162038e792e5819eb0958f95bba5455ce79d1a0b817"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.256279 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.257438 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" event={"ID":"bda51148-a37f-4767-8970-51ea56ecdfc7","Type":"ContainerStarted","Data":"990529f3fcfa8fe72d855a069bb0c66f7ac6ed98e662ad0d1e22a6c83deb7af5"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.257751 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.259059 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" event={"ID":"c9bc9b0e-b690-47c1-92ea-bea335fc0b41","Type":"ContainerStarted","Data":"81c12121e4e9f48b25a681a397252bd83835e51502babae0efc0e3b161e4fd64"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.259379 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.261009 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" event={"ID":"db8c5f93-fbaf-4f34-9214-ec7e463beb79","Type":"ContainerStarted","Data":"f07166856950d08ed96f941e4c788bf76d62c316c09b16a42891498a91d738e7"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.261968 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" event={"ID":"27eb5e1a-3047-4e87-9ad1-f948e11dfe25","Type":"ContainerStarted","Data":"51174230eb6925176d4a979efd4998e070b4ee7a3ba6d62f880e406d4d334059"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.262313 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.263242 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" 
event={"ID":"cb36e4de-bd33-4daf-83f5-1ced8ce56c90","Type":"ContainerStarted","Data":"4010afe04aecf62fe6379c58ef15bae6fc63e5aa76a6eb9c59aea57ae7726747"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.263534 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.265090 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" event={"ID":"c4788302-e01e-485b-b716-a6db7a2ac272","Type":"ContainerStarted","Data":"3a5c8662c090ed2563e0e852c4113db7062512d19d779887c9c4b3bca7a6c298"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.265550 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.266341 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" event={"ID":"0ba4109b-0e34-4c97-884a-d70052bf8082","Type":"ContainerStarted","Data":"70d8fb47e267e041d0b24e497a714277c5271df70a5f31140da1d9d4d8ebf194"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.266382 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" event={"ID":"0ba4109b-0e34-4c97-884a-d70052bf8082","Type":"ContainerStarted","Data":"68e46448014c80653fc159808a1310a559e2ea94dad109513fe6deb656c71268"} Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.266845 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.289113 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podStartSLOduration=4.605455635 podStartE2EDuration="48.289098225s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.19261953 +0000 UTC m=+964.353891799" lastFinishedPulling="2026-01-26 16:22:12.87626212 +0000 UTC m=+1008.037534389" observedRunningTime="2026-01-26 16:22:14.288638332 +0000 UTC m=+1009.449910601" watchObservedRunningTime="2026-01-26 16:22:14.289098225 +0000 UTC m=+1009.450370484" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.373981 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" podStartSLOduration=4.291021453 podStartE2EDuration="48.373966114s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:28.814251833 +0000 UTC m=+963.975524102" lastFinishedPulling="2026-01-26 16:22:12.897196494 +0000 UTC m=+1008.058468763" observedRunningTime="2026-01-26 16:22:14.367116759 +0000 UTC m=+1009.528389028" watchObservedRunningTime="2026-01-26 16:22:14.373966114 +0000 UTC m=+1009.535238383" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.375173 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" podStartSLOduration=4.078094847 podStartE2EDuration="47.375168958s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.600171134 +0000 UTC 
m=+964.761443403" lastFinishedPulling="2026-01-26 16:22:12.897245245 +0000 UTC m=+1008.058517514" observedRunningTime="2026-01-26 16:22:14.329959115 +0000 UTC m=+1009.491231384" watchObservedRunningTime="2026-01-26 16:22:14.375168958 +0000 UTC m=+1009.536441217" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.398507 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podStartSLOduration=5.284004845 podStartE2EDuration="48.39849501s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.658281348 +0000 UTC m=+964.819553617" lastFinishedPulling="2026-01-26 16:22:12.772771513 +0000 UTC m=+1007.934043782" observedRunningTime="2026-01-26 16:22:14.395403462 +0000 UTC m=+1009.556675731" watchObservedRunningTime="2026-01-26 16:22:14.39849501 +0000 UTC m=+1009.559767279" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.422398 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" podStartSLOduration=4.798527349 podStartE2EDuration="48.422381628s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.273340925 +0000 UTC m=+964.434613194" lastFinishedPulling="2026-01-26 16:22:12.897195204 +0000 UTC m=+1008.058467473" observedRunningTime="2026-01-26 16:22:14.421606436 +0000 UTC m=+1009.582878705" watchObservedRunningTime="2026-01-26 16:22:14.422381628 +0000 UTC m=+1009.583653897" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.452934 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podStartSLOduration=3.5688969200000003 podStartE2EDuration="48.452917454s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:28.182170935 +0000 UTC m=+963.343443204" lastFinishedPulling="2026-01-26 16:22:13.066191469 +0000 UTC m=+1008.227463738" observedRunningTime="2026-01-26 16:22:14.446945635 +0000 UTC m=+1009.608217904" watchObservedRunningTime="2026-01-26 16:22:14.452917454 +0000 UTC m=+1009.614189713" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.480511 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podStartSLOduration=3.996221315 podStartE2EDuration="47.480495097s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.58096825 +0000 UTC m=+964.742240519" lastFinishedPulling="2026-01-26 16:22:13.065242032 +0000 UTC m=+1008.226514301" observedRunningTime="2026-01-26 16:22:14.477322197 +0000 UTC m=+1009.638594466" watchObservedRunningTime="2026-01-26 16:22:14.480495097 +0000 UTC m=+1009.641767366" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.572027 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podStartSLOduration=47.572011084 podStartE2EDuration="47.572011084s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:22:14.57149939 +0000 UTC m=+1009.732771659" watchObservedRunningTime="2026-01-26 16:22:14.572011084 +0000 UTC m=+1009.733283353" Jan 26 16:22:14 crc 
kubenswrapper[4680]: I0126 16:22:14.599933 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" podStartSLOduration=8.333199272 podStartE2EDuration="48.599917196s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.270673999 +0000 UTC m=+964.431946268" lastFinishedPulling="2026-01-26 16:22:09.537391883 +0000 UTC m=+1004.698664192" observedRunningTime="2026-01-26 16:22:14.596216981 +0000 UTC m=+1009.757489250" watchObservedRunningTime="2026-01-26 16:22:14.599917196 +0000 UTC m=+1009.761189465" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.626887 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" podStartSLOduration=4.977196921 podStartE2EDuration="48.626872541s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.246707201 +0000 UTC m=+964.407979470" lastFinishedPulling="2026-01-26 16:22:12.896382821 +0000 UTC m=+1008.057655090" observedRunningTime="2026-01-26 16:22:14.622546758 +0000 UTC m=+1009.783819027" watchObservedRunningTime="2026-01-26 16:22:14.626872541 +0000 UTC m=+1009.788144810" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.653801 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podStartSLOduration=4.344356165 podStartE2EDuration="47.653783675s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.566794049 +0000 UTC m=+964.728066318" lastFinishedPulling="2026-01-26 16:22:12.876221559 +0000 UTC m=+1008.037493828" observedRunningTime="2026-01-26 16:22:14.649852793 +0000 UTC m=+1009.811125052" watchObservedRunningTime="2026-01-26 16:22:14.653783675 +0000 UTC m=+1009.815055944" Jan 26 16:22:14 crc kubenswrapper[4680]: I0126 16:22:14.679779 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podStartSLOduration=8.071019569 podStartE2EDuration="48.679760362s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:28.92864668 +0000 UTC m=+964.089918949" lastFinishedPulling="2026-01-26 16:22:09.537387473 +0000 UTC m=+1004.698659742" observedRunningTime="2026-01-26 16:22:14.679018271 +0000 UTC m=+1009.840290540" watchObservedRunningTime="2026-01-26 16:22:14.679760362 +0000 UTC m=+1009.841032631" Jan 26 16:22:16 crc kubenswrapper[4680]: I0126 16:22:16.980458 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:22:16 crc kubenswrapper[4680]: I0126 16:22:16.980507 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:22:17 crc kubenswrapper[4680]: E0126 16:22:17.173301 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.027022 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.289764 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" event={"ID":"db8c5f93-fbaf-4f34-9214-ec7e463beb79","Type":"ContainerStarted","Data":"3df85599ace1f883339ccfd636804b7ca4fb3f308a8aab0c2c1700e69f5cebec"} Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.290115 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.291129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" event={"ID":"f79a7334-75ae-40a1-81c3-ce27e0567de9","Type":"ContainerStarted","Data":"d40d97007471e20cd9ebb788679c114bbcfa1e8b38730d29ab05ce38530fe719"} Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.291519 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.292521 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" event={"ID":"51e3cde0-6a23-4d62-83ca-fc16415da2bb","Type":"ContainerStarted","Data":"ae1c784789ab8f8496b288035088f5c517f913ccc81fe5edff5b242892fc378b"} Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.292767 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.293740 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" event={"ID":"e9eb4184-e77b-49c1-b4af-cae5dc77b953","Type":"ContainerStarted","Data":"271536486eb999782823e130beac5f25c2fcbde3238d10a02a0a3f75d34fba9a"} Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.293901 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.318215 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" podStartSLOduration=47.685072423 podStartE2EDuration="51.31819645s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:22:13.719429118 +0000 UTC m=+1008.880701387" lastFinishedPulling="2026-01-26 16:22:17.352553145 +0000 UTC m=+1012.513825414" observedRunningTime="2026-01-26 16:22:18.315875614 +0000 UTC m=+1013.477147883" watchObservedRunningTime="2026-01-26 16:22:18.31819645 +0000 UTC m=+1013.479468719" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.351650 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" podStartSLOduration=48.45205552 podStartE2EDuration="52.351625549s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:22:13.448121668 +0000 UTC m=+1008.609393937" lastFinishedPulling="2026-01-26 16:22:17.347691697 +0000 UTC m=+1012.508963966" observedRunningTime="2026-01-26 16:22:18.346256856 +0000 UTC m=+1013.507529125" watchObservedRunningTime="2026-01-26 16:22:18.351625549 +0000 UTC m=+1013.512897818" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.372148 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podStartSLOduration=4.0706768 podStartE2EDuration="52.372131661s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.316148836 +0000 UTC m=+964.477421095" lastFinishedPulling="2026-01-26 16:22:17.617603687 +0000 UTC m=+1012.778875956" observedRunningTime="2026-01-26 16:22:18.370138744 +0000 UTC m=+1013.531411013" watchObservedRunningTime="2026-01-26 16:22:18.372131661 +0000 UTC m=+1013.533403930" Jan 26 16:22:18 crc kubenswrapper[4680]: I0126 16:22:18.389195 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podStartSLOduration=3.22286949 podStartE2EDuration="52.389175824s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:28.181384033 +0000 UTC m=+963.342656302" lastFinishedPulling="2026-01-26 16:22:17.347690367 +0000 UTC m=+1012.508962636" observedRunningTime="2026-01-26 16:22:18.384559013 +0000 UTC m=+1013.545831282" watchObservedRunningTime="2026-01-26 16:22:18.389175824 +0000 UTC m=+1013.550448093" Jan 26 16:22:19 crc kubenswrapper[4680]: E0126 16:22:19.170808 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" Jan 26 16:22:19 crc kubenswrapper[4680]: I0126 16:22:19.299845 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" event={"ID":"58579a35-1ab3-4610-9d38-66824866b438","Type":"ContainerStarted","Data":"a0c5f7103679d530bd07e96b34e34ded107bda0a36b8734db02901588d7a0f7c"} Jan 26 16:22:20 crc kubenswrapper[4680]: I0126 16:22:20.306607 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:22:20 crc kubenswrapper[4680]: I0126 16:22:20.325889 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podStartSLOduration=4.554241632 podStartE2EDuration="54.325872366s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:28.811902126 +0000 UTC m=+963.973174395" lastFinishedPulling="2026-01-26 16:22:18.58353285 +0000 UTC m=+1013.744805129" observedRunningTime="2026-01-26 16:22:20.320336679 +0000 UTC m=+1015.481608968" watchObservedRunningTime="2026-01-26 16:22:20.325872366 +0000 UTC m=+1015.487144635" Jan 26 16:22:22 crc 
kubenswrapper[4680]: E0126 16:22:22.172227 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" Jan 26 16:22:23 crc kubenswrapper[4680]: I0126 16:22:23.110408 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" Jan 26 16:22:23 crc kubenswrapper[4680]: I0126 16:22:23.623142 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 16:22:24 crc kubenswrapper[4680]: I0126 16:22:24.130468 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 16:22:24 crc kubenswrapper[4680]: E0126 16:22:24.223902 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.342494 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" event={"ID":"bba916e9-436b-4c01-ba4c-2f758ed6d988","Type":"ContainerStarted","Data":"25a62183ad1af2e0853dd60075d74a36877cd7d978102aa191dfef524c3e136b"} Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.343537 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.357905 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podStartSLOduration=3.905279221 podStartE2EDuration="1m0.357889415s" podCreationTimestamp="2026-01-26 16:21:26 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.243928272 +0000 UTC m=+964.405200541" lastFinishedPulling="2026-01-26 16:22:25.696538466 +0000 UTC m=+1020.857810735" observedRunningTime="2026-01-26 16:22:26.354978973 +0000 UTC m=+1021.516251252" watchObservedRunningTime="2026-01-26 16:22:26.357889415 +0000 UTC m=+1021.519161684" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.878688 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.891940 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.916609 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 16:22:26 crc kubenswrapper[4680]: I0126 16:22:26.992448 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.099056 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.137796 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.287280 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.347689 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.350258 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" event={"ID":"db3000c1-08a3-4607-8e3f-143b4acc639f","Type":"ContainerStarted","Data":"eb5a26cab564f1ae4ae9d971fd410f0fe182c3e856314c7d423ba0dae860a4ec"} Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.389100 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4tmgz" podStartSLOduration=3.44867262 podStartE2EDuration="1m0.389059559s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.649293174 +0000 UTC m=+964.810565443" lastFinishedPulling="2026-01-26 16:22:26.589680113 +0000 UTC m=+1021.750952382" observedRunningTime="2026-01-26 16:22:27.382622046 +0000 UTC m=+1022.543894315" watchObservedRunningTime="2026-01-26 16:22:27.389059559 +0000 UTC m=+1022.550331828" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.403403 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.466581 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.548053 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.700188 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 16:22:27 crc kubenswrapper[4680]: I0126 16:22:27.907725 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 16:22:29 crc kubenswrapper[4680]: I0126 16:22:29.363691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" event={"ID":"5d25f9f6-553d-477c-82f7-a25f017cb21a","Type":"ContainerStarted","Data":"a6ba8bccce8332fdf6bb09f8c3f57ea9cf2978cc776db02200efc0a3ad492c40"} Jan 26 16:22:29 crc kubenswrapper[4680]: I0126 16:22:29.364838 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:22:29 crc kubenswrapper[4680]: I0126 16:22:29.380456 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podStartSLOduration=3.426109598 podStartE2EDuration="1m2.380441414s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.674192738 +0000 UTC m=+964.835465007" lastFinishedPulling="2026-01-26 16:22:28.628524554 +0000 UTC m=+1023.789796823" observedRunningTime="2026-01-26 16:22:29.377624464 +0000 UTC m=+1024.538896733" watchObservedRunningTime="2026-01-26 16:22:29.380441414 +0000 UTC m=+1024.541713673" Jan 26 16:22:32 crc kubenswrapper[4680]: I0126 16:22:32.381542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" event={"ID":"923528ea-e48b-477c-aa11-6912e8167448","Type":"ContainerStarted","Data":"08dfd226f28a24df975b25ecb6b882b49c1ad9e104fe112cf2599e54c0b90165"} Jan 26 16:22:32 crc kubenswrapper[4680]: I0126 16:22:32.382922 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:22:32 crc kubenswrapper[4680]: I0126 16:22:32.398325 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podStartSLOduration=3.443084648 podStartE2EDuration="1m5.398306529s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.687527806 +0000 UTC m=+964.848800075" lastFinishedPulling="2026-01-26 16:22:31.642749687 +0000 UTC m=+1026.804021956" observedRunningTime="2026-01-26 16:22:32.393611196 +0000 UTC m=+1027.554883465" watchObservedRunningTime="2026-01-26 16:22:32.398306529 +0000 UTC m=+1027.559578798" Jan 26 16:22:36 crc kubenswrapper[4680]: I0126 16:22:36.405163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" event={"ID":"5140d771-5948-4407-b1d9-aa1aa80415a6","Type":"ContainerStarted","Data":"9166f353dff7ab22093551c1f76caaeffaf026ecffe82702d10c340a51107988"} Jan 26 16:22:36 crc kubenswrapper[4680]: I0126 16:22:36.405714 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" Jan 26 16:22:36 crc kubenswrapper[4680]: I0126 16:22:36.432510 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podStartSLOduration=3.540822352 podStartE2EDuration="1m9.432478328s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.692692672 +0000 UTC m=+964.853964941" lastFinishedPulling="2026-01-26 16:22:35.584348648 +0000 UTC m=+1030.745620917" observedRunningTime="2026-01-26 16:22:36.418030847 +0000 UTC m=+1031.579303116" watchObservedRunningTime="2026-01-26 16:22:36.432478328 +0000 UTC m=+1031.593750617" Jan 26 16:22:37 crc kubenswrapper[4680]: I0126 16:22:37.311011 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" Jan 26 16:22:38 crc kubenswrapper[4680]: I0126 16:22:38.107554 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 16:22:38 crc kubenswrapper[4680]: I0126 16:22:38.418514 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" event={"ID":"19dbdff9-08dd-449c-8794-20b497c7119d","Type":"ContainerStarted","Data":"c6e4c3d7027316a89d47bca2a3c4469bc57548821fb0c30d5fd07c4e1941d5b1"} Jan 26 16:22:38 crc kubenswrapper[4680]: I0126 16:22:38.418770 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" Jan 26 16:22:38 crc kubenswrapper[4680]: I0126 16:22:38.440678 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podStartSLOduration=3.564008779 podStartE2EDuration="1m11.440663179s" podCreationTimestamp="2026-01-26 16:21:27 +0000 UTC" firstStartedPulling="2026-01-26 16:21:29.687505045 +0000 UTC m=+964.848777314" lastFinishedPulling="2026-01-26 16:22:37.564159445 +0000 UTC m=+1032.725431714" observedRunningTime="2026-01-26 16:22:38.43504958 +0000 UTC m=+1033.596321849" watchObservedRunningTime="2026-01-26 16:22:38.440663179 +0000 UTC m=+1033.601935438" Jan 26 16:22:38 crc kubenswrapper[4680]: I0126 16:22:38.462853 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" Jan 26 16:22:46 crc kubenswrapper[4680]: I0126 16:22:46.980588 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:22:46 crc kubenswrapper[4680]: I0126 16:22:46.981176 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:22:46 crc kubenswrapper[4680]: I0126 16:22:46.981220 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:22:46 crc kubenswrapper[4680]: I0126 16:22:46.981802 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:22:46 crc kubenswrapper[4680]: I0126 16:22:46.981853 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9" gracePeriod=600 Jan 26 16:22:47 crc kubenswrapper[4680]: I0126 16:22:47.477142 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9" exitCode=0 
Jan 26 16:22:47 crc kubenswrapper[4680]: I0126 16:22:47.477674 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9"}
Jan 26 16:22:47 crc kubenswrapper[4680]: I0126 16:22:47.477706 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211"}
Jan 26 16:22:47 crc kubenswrapper[4680]: I0126 16:22:47.477725 4680 scope.go:117] "RemoveContainer" containerID="e6ea51382c2431c8381beef85985fd3da79a05f0dd4a6e879c92eee56a2edc94"
Jan 26 16:22:47 crc kubenswrapper[4680]: I0126 16:22:47.750901 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz"
Jan 26 16:22:48 crc kubenswrapper[4680]: I0126 16:22:48.264037 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.947568 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"]
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.949486 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.957384 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.957585 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.957696 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.957830 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jq8gz"
Jan 26 16:23:05 crc kubenswrapper[4680]: I0126 16:23:05.964112 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"]
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.021892 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"]
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.027219 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6986454697-ncwsq"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.030871 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"]
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.039209 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.062540 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.062601 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsbw\" (UniqueName: \"kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.163430 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.163572 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.163622 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.163651 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsbw\" (UniqueName: \"kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.163676 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbfb\" (UniqueName: \"kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.164742 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg"
Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.189239 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqsbw\" (UniqueName: \"kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw\") pod \"dnsmasq-dns-6b5c89f94f-b9zqg\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.264435 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.264562 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.264586 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbfb\" (UniqueName: \"kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.265492 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.265536 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.281344 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.288305 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbfb\" (UniqueName: \"kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb\") pod \"dnsmasq-dns-6986454697-ncwsq\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.362443 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.722740 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"] Jan 26 16:23:06 crc kubenswrapper[4680]: I0126 16:23:06.842481 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"] Jan 26 16:23:06 crc kubenswrapper[4680]: W0126 16:23:06.844749 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464e4147_309e_4da7_bf19_642ba7e5433a.slice/crio-35fcf15359c937830c8ee2a6066f8d4951305d14f86b2a22b467ca5a5148f7f5 WatchSource:0}: Error finding container 35fcf15359c937830c8ee2a6066f8d4951305d14f86b2a22b467ca5a5148f7f5: Status 404 returned error can't find the container with id 35fcf15359c937830c8ee2a6066f8d4951305d14f86b2a22b467ca5a5148f7f5 Jan 26 16:23:07 crc kubenswrapper[4680]: I0126 16:23:07.622817 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6986454697-ncwsq" event={"ID":"464e4147-309e-4da7-bf19-642ba7e5433a","Type":"ContainerStarted","Data":"35fcf15359c937830c8ee2a6066f8d4951305d14f86b2a22b467ca5a5148f7f5"} Jan 26 16:23:07 crc kubenswrapper[4680]: I0126 16:23:07.623942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" event={"ID":"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5","Type":"ContainerStarted","Data":"df7719db6f22bf3f33c76f2048a500b01a7520f63c67e17d41ea383206bf81db"} Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.626322 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"] Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.652483 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"] Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.653600 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.678977 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"] Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.816843 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.816902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbpb9\" (UniqueName: \"kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.816977 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.918181 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.918243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.918270 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbpb9\" (UniqueName: \"kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.919384 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.919481 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.961528 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbpb9\" (UniqueName: 
\"kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9\") pod \"dnsmasq-dns-55fc87b5b9-5wjrd\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.970464 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:08 crc kubenswrapper[4680]: I0126 16:23:08.986041 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"] Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.040234 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.053804 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.097955 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.230622 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.230662 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpkv\" (UniqueName: \"kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.230779 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.335540 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.335925 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpkv\" (UniqueName: \"kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.336020 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.338784 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.343790 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.354925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpkv\" (UniqueName: \"kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv\") pod \"dnsmasq-dns-86d4ff7b85-svzjv\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.403899 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.571519 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"]
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.830984 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.833965 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837397 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837441 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837517 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837529 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837707 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p6gcx"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.837857 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.838004 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.856700 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949110 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949149 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949376 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949455 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949593 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949613 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949649 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4gv\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:09 crc kubenswrapper[4680]: I0126 16:23:09.949686 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051153 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051220 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051251 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051320 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4gv\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051346 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051378 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051395 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051414 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051431 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.051481 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.053305 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.055554 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.057607 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.058038 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.058384 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.058725 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.059969 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.062332 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.063249 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.077014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4gv\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.077449 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.093932 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.143741 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.145357 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.147392 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.148014 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.150357 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhs5p"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.150404 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.150357 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.150370 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.150423 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.165656 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.170461 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.254862 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255196 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255220 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmd8d\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255255 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255328 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255357 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255376 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0"
\"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255421 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.255436 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356621 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356735 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356752 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356782 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc 
kubenswrapper[4680]: I0126 16:23:10.356811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356831 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmd8d\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.356880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.359084 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.360424 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.360436 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.361401 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.361496 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.362271 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.362399 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.364695 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.374241 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.375177 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmd8d\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.376642 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.379561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:10 crc kubenswrapper[4680]: I0126 16:23:10.488283 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.306242 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.308769 4680 util.go:30] "No sandbox for pod can be found. 
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.311376 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.320446 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.320799 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.320955 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-m8cf9"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.321151 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.322163 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372556 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372696 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372774 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372803 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372861 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6476c77-06ae-4747-900e-41566a6063ca-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.372974 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z8b7\" (UniqueName: \"kubernetes.io/projected/e6476c77-06ae-4747-900e-41566a6063ca-kube-api-access-9z8b7\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.373007 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.373120 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.474797 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.474861 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6476c77-06ae-4747-900e-41566a6063ca-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.474908 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z8b7\" (UniqueName: \"kubernetes.io/projected/e6476c77-06ae-4747-900e-41566a6063ca-kube-api-access-9z8b7\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.474950 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.475029 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.475112 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.475140 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.475165 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.477114 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.477145 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.477748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.477974 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6476c77-06ae-4747-900e-41566a6063ca-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.477983 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6476c77-06ae-4747-900e-41566a6063ca-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.481669 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.482641 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6476c77-06ae-4747-900e-41566a6063ca-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.493030 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z8b7\" (UniqueName: \"kubernetes.io/projected/e6476c77-06ae-4747-900e-41566a6063ca-kube-api-access-9z8b7\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.507189 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6476c77-06ae-4747-900e-41566a6063ca\") " pod="openstack/openstack-galera-0"
Jan 26 16:23:11 crc kubenswrapper[4680]: I0126 16:23:11.639605 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.674132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" event={"ID":"8a0254d2-59ab-4ccd-93b8-246ee618bbf3","Type":"ContainerStarted","Data":"2e2e9bd3ee18ff3530c88689ee53d880dc7e5af48ecff895a1ed9d2c0c101049"}
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.707498 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.708729 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.712641 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.715894 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.716141 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6sv6h"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.716212 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.716220 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.794856 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.794909 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2sbq\" (UniqueName: \"kubernetes.io/projected/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kube-api-access-n2sbq\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.794942 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.794976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.794997 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.795037 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.795055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.795089 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896637 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2sbq\" (UniqueName: \"kubernetes.io/projected/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kube-api-access-n2sbq\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896693 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896723 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896745 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896786 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896802 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896824 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.896889 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.897375 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.897736 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.897930 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.898643 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.898818 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.912060 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.912756 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.920828 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2sbq\" (UniqueName: \"kubernetes.io/projected/6e6f45ac-80ed-41f2-b9b8-94e60a1656d4-kube-api-access-n2sbq\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:12 crc kubenswrapper[4680]: I0126 16:23:12.938700 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.034341 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.190221 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.191117 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.192770 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5c594"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.194729 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.194938 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.218548 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.302098 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.302162 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.302408 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-config-data\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.302559 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxfd\" (UniqueName: \"kubernetes.io/projected/85ecf947-885e-418d-871c-bbef3a8353fe-kube-api-access-bdxfd\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.302633 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-kolla-config\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.404523 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxfd\" (UniqueName: \"kubernetes.io/projected/85ecf947-885e-418d-871c-bbef3a8353fe-kube-api-access-bdxfd\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.405023 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-kolla-config\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.405078 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.405106 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.405164 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-config-data\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.406260 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-kolla-config\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.406308 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ecf947-885e-418d-871c-bbef3a8353fe-config-data\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.409233 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-combined-ca-bundle\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.409326 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ecf947-885e-418d-871c-bbef3a8353fe-memcached-tls-certs\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.425952 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxfd\" (UniqueName: \"kubernetes.io/projected/85ecf947-885e-418d-871c-bbef3a8353fe-kube-api-access-bdxfd\") pod \"memcached-0\" (UID: \"85ecf947-885e-418d-871c-bbef3a8353fe\") " pod="openstack/memcached-0"
Jan 26 16:23:13 crc kubenswrapper[4680]: I0126 16:23:13.509188 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.110202 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.111673 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.123167 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.130261 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-l4b2b"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.131055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct57d\" (UniqueName: \"kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d\") pod \"kube-state-metrics-0\" (UID: \"22f475ff-bcba-4cdd-a6ed-62be26882b42\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.233030 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct57d\" (UniqueName: \"kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d\") pod \"kube-state-metrics-0\" (UID: \"22f475ff-bcba-4cdd-a6ed-62be26882b42\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.261112 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct57d\" (UniqueName: \"kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d\") pod \"kube-state-metrics-0\" (UID: \"22f475ff-bcba-4cdd-a6ed-62be26882b42\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:23:15 crc kubenswrapper[4680]: I0126 16:23:15.431570 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:23:16 crc kubenswrapper[4680]: I0126 16:23:16.971015 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.468749 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c86j2"]
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.469845 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.473342 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.473698 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nsqqn"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.474147 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490043 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mscfm\" (UniqueName: \"kubernetes.io/projected/5db7b388-c09e-441f-88db-13916a2b9208-kube-api-access-mscfm\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490124 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-log-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490181 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-ovn-controller-tls-certs\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490272 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db7b388-c09e-441f-88db-13916a2b9208-scripts\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490311 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490332 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490356 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-combined-ca-bundle\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.490509 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c86j2"]
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.501868 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f49hh"]
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.504204 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.562771 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f49hh"]
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.592572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g8s\" (UniqueName: \"kubernetes.io/projected/19558e19-d16d-437a-87fd-2d02181963c8-kube-api-access-k8g8s\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.592726 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db7b388-c09e-441f-88db-13916a2b9208-scripts\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.592780 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-etc-ovs\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.593169 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19558e19-d16d-437a-87fd-2d02181963c8-scripts\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.593205 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.593329 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.593477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-combined-ca-bundle\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.593868 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.594027 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-run\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.594157 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-run\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.594197 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mscfm\" (UniqueName: \"kubernetes.io/projected/5db7b388-c09e-441f-88db-13916a2b9208-kube-api-access-mscfm\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.594660 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-log-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.594841 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5db7b388-c09e-441f-88db-13916a2b9208-var-log-ovn\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.595042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-lib\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.595239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-log\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.595324 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-ovn-controller-tls-certs\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.597120 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db7b388-c09e-441f-88db-13916a2b9208-scripts\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.603205 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-ovn-controller-tls-certs\") pod \"ovn-controller-c86j2\" (UID:
\"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.610250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mscfm\" (UniqueName: \"kubernetes.io/projected/5db7b388-c09e-441f-88db-13916a2b9208-kube-api-access-mscfm\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.625760 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7b388-c09e-441f-88db-13916a2b9208-combined-ca-bundle\") pod \"ovn-controller-c86j2\" (UID: \"5db7b388-c09e-441f-88db-13916a2b9208\") " pod="openstack/ovn-controller-c86j2" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.696844 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g8s\" (UniqueName: \"kubernetes.io/projected/19558e19-d16d-437a-87fd-2d02181963c8-kube-api-access-k8g8s\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.696891 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-etc-ovs\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.696917 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19558e19-d16d-437a-87fd-2d02181963c8-scripts\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.696962 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-run\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.696991 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-lib\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.697010 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-log\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.697323 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-log\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.697386 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-run\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.697601 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-var-lib\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.697807 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19558e19-d16d-437a-87fd-2d02181963c8-etc-ovs\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.701184 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19558e19-d16d-437a-87fd-2d02181963c8-scripts\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.713801 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g8s\" (UniqueName: \"kubernetes.io/projected/19558e19-d16d-437a-87fd-2d02181963c8-kube-api-access-k8g8s\") pod \"ovn-controller-ovs-f49hh\" (UID: \"19558e19-d16d-437a-87fd-2d02181963c8\") " pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.788827 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c86j2" Jan 26 16:23:18 crc kubenswrapper[4680]: I0126 16:23:18.828988 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.376182 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.377601 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.380764 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.381034 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.381142 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.381298 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.382347 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8gbqx" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.388573 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.508322 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.508375 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.508403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.508858 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.509035 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.509059 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bc6\" (UniqueName: \"kubernetes.io/projected/cdcb3cbb-b842-443f-9a47-749970d13f36-kube-api-access-94bc6\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.510321 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.510361 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611698 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611764 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611790 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611881 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611911 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.611932 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bc6\" (UniqueName: \"kubernetes.io/projected/cdcb3cbb-b842-443f-9a47-749970d13f36-kube-api-access-94bc6\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 
16:23:19.614690 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.617224 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.617988 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.618644 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.620612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.626854 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdcb3cbb-b842-443f-9a47-749970d13f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.627362 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb3cbb-b842-443f-9a47-749970d13f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.645873 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.651211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bc6\" (UniqueName: \"kubernetes.io/projected/cdcb3cbb-b842-443f-9a47-749970d13f36-kube-api-access-94bc6\") pod \"ovsdbserver-nb-0\" (UID: \"cdcb3cbb-b842-443f-9a47-749970d13f36\") " pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:19 crc kubenswrapper[4680]: I0126 16:23:19.708168 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.628729 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.631707 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.637147 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.637354 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.639715 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.641578 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.650015 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r6sw8" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758051 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758118 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758143 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758166 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhh9r\" (UniqueName: \"kubernetes.io/projected/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-kube-api-access-nhh9r\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758209 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758257 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-config\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc 
kubenswrapper[4680]: I0126 16:23:22.758297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.758317 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.774568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerStarted","Data":"f7ec228a1927a1a98103559e36930d05879265c6320b21245f43349f9b944a11"} Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-config\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859804 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859942 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859975 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.859996 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.860018 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhh9r\" (UniqueName: \"kubernetes.io/projected/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-kube-api-access-nhh9r\") pod \"ovsdbserver-sb-0\" 
(UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.860062 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.860611 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.861156 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.861272 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-config\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.863493 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.869973 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.870363 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.870790 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.882392 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.886364 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhh9r\" (UniqueName: \"kubernetes.io/projected/c5f856a6-fa8d-4fd6-8f69-a13b165488b4-kube-api-access-nhh9r\") 
pod \"ovsdbserver-sb-0\" (UID: \"c5f856a6-fa8d-4fd6-8f69-a13b165488b4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:22 crc kubenswrapper[4680]: I0126 16:23:22.956674 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:23 crc kubenswrapper[4680]: I0126 16:23:23.096705 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.567384 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.567448 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.567591 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6986454697-ncwsq_openstack(464e4147-309e-4da7-bf19-642ba7e5433a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.572444 
4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6986454697-ncwsq" podUID="464e4147-309e-4da7-bf19-642ba7e5433a" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.607800 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.607853 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.607966 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqsbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6b5c89f94f-b9zqg_openstack(918f0c8e-c5ca-40b1-9c4a-2e759679e5c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:23:23 crc kubenswrapper[4680]: E0126 16:23:23.609256 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" podUID="918f0c8e-c5ca-40b1-9c4a-2e759679e5c5" Jan 26 16:23:23 crc 
kubenswrapper[4680]: I0126 16:23:23.790740 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" event={"ID":"662eacd6-016f-459a-806f-4bf940065b6a","Type":"ContainerStarted","Data":"a7959fabc2181069f0928ae923088f2b5512c794e3de8a3f31ce0b0163c6a066"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.082277 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: W0126 16:23:24.082660 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ecf947_885e_418d_871c_bbef3a8353fe.slice/crio-9f25a382e24c229bf4edcca839828882c71dbc7c076e7c924a231121ad469140 WatchSource:0}: Error finding container 9f25a382e24c229bf4edcca839828882c71dbc7c076e7c924a231121ad469140: Status 404 returned error can't find the container with id 9f25a382e24c229bf4edcca839828882c71dbc7c076e7c924a231121ad469140 Jan 26 16:23:24 crc kubenswrapper[4680]: W0126 16:23:24.099464 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b7b1e0b_5218_426e_aca1_76d49633811c.slice/crio-5ab71a6f154c7e0d61ee98f5008d41271e73eac4010ef0fcc61317c2bdcefa6a WatchSource:0}: Error finding container 5ab71a6f154c7e0d61ee98f5008d41271e73eac4010ef0fcc61317c2bdcefa6a: Status 404 returned error can't find the container with id 5ab71a6f154c7e0d61ee98f5008d41271e73eac4010ef0fcc61317c2bdcefa6a Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.114771 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.367224 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.451309 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.510104 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqsbw\" (UniqueName: \"kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw\") pod \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.510246 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config\") pod \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\" (UID: \"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5\") " Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.510866 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config" (OuterVolumeSpecName: "config") pod "918f0c8e-c5ca-40b1-9c4a-2e759679e5c5" (UID: "918f0c8e-c5ca-40b1-9c4a-2e759679e5c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.515777 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw" (OuterVolumeSpecName: "kube-api-access-xqsbw") pod "918f0c8e-c5ca-40b1-9c4a-2e759679e5c5" (UID: "918f0c8e-c5ca-40b1-9c4a-2e759679e5c5"). 
InnerVolumeSpecName "kube-api-access-xqsbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.585048 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.611961 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.612000 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqsbw\" (UniqueName: \"kubernetes.io/projected/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5-kube-api-access-xqsbw\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.706861 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.712931 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbfb\" (UniqueName: \"kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb\") pod \"464e4147-309e-4da7-bf19-642ba7e5433a\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.713039 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc\") pod \"464e4147-309e-4da7-bf19-642ba7e5433a\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.713110 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config\") pod \"464e4147-309e-4da7-bf19-642ba7e5433a\" (UID: \"464e4147-309e-4da7-bf19-642ba7e5433a\") " Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.713760 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config" (OuterVolumeSpecName: "config") pod "464e4147-309e-4da7-bf19-642ba7e5433a" (UID: "464e4147-309e-4da7-bf19-642ba7e5433a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.714623 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "464e4147-309e-4da7-bf19-642ba7e5433a" (UID: "464e4147-309e-4da7-bf19-642ba7e5433a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.721938 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb" (OuterVolumeSpecName: "kube-api-access-fvbfb") pod "464e4147-309e-4da7-bf19-642ba7e5433a" (UID: "464e4147-309e-4da7-bf19-642ba7e5433a"). InnerVolumeSpecName "kube-api-access-fvbfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:24 crc kubenswrapper[4680]: W0126 16:23:24.743037 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdcb3cbb_b842_443f_9a47_749970d13f36.slice/crio-6f253f0886ca5d1479bb7ae0e3729d2c073ed1be0b9ffd50b9a4286f1b631d20 WatchSource:0}: Error finding container 6f253f0886ca5d1479bb7ae0e3729d2c073ed1be0b9ffd50b9a4286f1b631d20: Status 404 returned error can't find the container with id 6f253f0886ca5d1479bb7ae0e3729d2c073ed1be0b9ffd50b9a4286f1b631d20 Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.755059 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c86j2"] Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.785020 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.796950 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 16:23:24 crc kubenswrapper[4680]: W0126 16:23:24.812081 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db7b388_c09e_441f_88db_13916a2b9208.slice/crio-af470c8878cf1b4409689ce7756837a52cf03736d3ea5da47f21c1dfd75bd25d WatchSource:0}: Error finding container af470c8878cf1b4409689ce7756837a52cf03736d3ea5da47f21c1dfd75bd25d: Status 404 returned error can't find the container with id af470c8878cf1b4409689ce7756837a52cf03736d3ea5da47f21c1dfd75bd25d Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.812764 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cdcb3cbb-b842-443f-9a47-749970d13f36","Type":"ContainerStarted","Data":"6f253f0886ca5d1479bb7ae0e3729d2c073ed1be0b9ffd50b9a4286f1b631d20"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.815107 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerStarted","Data":"5ab71a6f154c7e0d61ee98f5008d41271e73eac4010ef0fcc61317c2bdcefa6a"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.815782 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbfb\" (UniqueName: \"kubernetes.io/projected/464e4147-309e-4da7-bf19-642ba7e5433a-kube-api-access-fvbfb\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.815806 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.815818 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464e4147-309e-4da7-bf19-642ba7e5433a-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.818198 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6986454697-ncwsq" event={"ID":"464e4147-309e-4da7-bf19-642ba7e5433a","Type":"ContainerDied","Data":"35fcf15359c937830c8ee2a6066f8d4951305d14f86b2a22b467ca5a5148f7f5"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.818221 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6986454697-ncwsq" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.822431 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" event={"ID":"918f0c8e-c5ca-40b1-9c4a-2e759679e5c5","Type":"ContainerDied","Data":"df7719db6f22bf3f33c76f2048a500b01a7520f63c67e17d41ea383206bf81db"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.822552 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b5c89f94f-b9zqg" Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.825480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"85ecf947-885e-418d-871c-bbef3a8353fe","Type":"ContainerStarted","Data":"9f25a382e24c229bf4edcca839828882c71dbc7c076e7c924a231121ad469140"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.829939 4680 generic.go:334] "Generic (PLEG): container finished" podID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerID="39691f5600e925638742440d0a5604bdc6d25a337109958b8b4c51200e0bde43" exitCode=0 Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.830024 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" event={"ID":"8a0254d2-59ab-4ccd-93b8-246ee618bbf3","Type":"ContainerDied","Data":"39691f5600e925638742440d0a5604bdc6d25a337109958b8b4c51200e0bde43"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.891164 4680 generic.go:334] "Generic (PLEG): container finished" podID="662eacd6-016f-459a-806f-4bf940065b6a" containerID="e5da89781776f8576f76e02b8e5e866b39eca83d4c58951e98752b266825c338" exitCode=0 Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.891233 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" event={"ID":"662eacd6-016f-459a-806f-4bf940065b6a","Type":"ContainerDied","Data":"e5da89781776f8576f76e02b8e5e866b39eca83d4c58951e98752b266825c338"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.912700 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerStarted","Data":"964ff9f15fab6a93b5e8e79be92f0899a91618d83c5fbf2669936c3b7e935f51"} Jan 26 16:23:24 crc kubenswrapper[4680]: I0126 16:23:24.924335 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.032424 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.051062 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b5c89f94f-b9zqg"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.063124 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.084029 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6986454697-ncwsq"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.093026 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f49hh"] Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.192171 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464e4147-309e-4da7-bf19-642ba7e5433a" path="/var/lib/kubelet/pods/464e4147-309e-4da7-bf19-642ba7e5433a/volumes" Jan 26 16:23:25 crc 
kubenswrapper[4680]: I0126 16:23:25.192518 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918f0c8e-c5ca-40b1-9c4a-2e759679e5c5" path="/var/lib/kubelet/pods/918f0c8e-c5ca-40b1-9c4a-2e759679e5c5/volumes" Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.920997 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22f475ff-bcba-4cdd-a6ed-62be26882b42","Type":"ContainerStarted","Data":"fb92deda3149a8c326e20cfe548e76a960dcbdf2be5ae6e4345605d815de6a2c"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.924518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" event={"ID":"662eacd6-016f-459a-806f-4bf940065b6a","Type":"ContainerStarted","Data":"93a266f10df23be2dd520422f8354149c408ac104a2425cba95ea85885983787"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.924609 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.928204 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6476c77-06ae-4747-900e-41566a6063ca","Type":"ContainerStarted","Data":"98dba328d9e999f8d858ffd45aa4f5a2e29c2b7a1aea287cfa4a922d8fb34186"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.929275 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f49hh" event={"ID":"19558e19-d16d-437a-87fd-2d02181963c8","Type":"ContainerStarted","Data":"a4b7b7d198e878f6358593bb2525cb83eea32d027eee88bed8499bb516ce7554"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.932289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" event={"ID":"8a0254d2-59ab-4ccd-93b8-246ee618bbf3","Type":"ContainerStarted","Data":"c160b6b8c6c03e7fa7319eee8eb7a32c9e660c05df07f2080a47f02886c9a5f7"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.932638 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.936220 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c5f856a6-fa8d-4fd6-8f69-a13b165488b4","Type":"ContainerStarted","Data":"10391df7f44daeb01943d80daddc4d47e7ea54983809ab215f7ab97ac2c8f50a"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.944644 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2" event={"ID":"5db7b388-c09e-441f-88db-13916a2b9208","Type":"ContainerStarted","Data":"af470c8878cf1b4409689ce7756837a52cf03736d3ea5da47f21c1dfd75bd25d"} Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.944708 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" podStartSLOduration=17.629718039 podStartE2EDuration="17.944690078s" podCreationTimestamp="2026-01-26 16:23:08 +0000 UTC" firstStartedPulling="2026-01-26 16:23:23.508847289 +0000 UTC m=+1078.670119558" lastFinishedPulling="2026-01-26 16:23:23.823819328 +0000 UTC m=+1078.985091597" observedRunningTime="2026-01-26 16:23:25.93947988 +0000 UTC m=+1081.100752149" watchObservedRunningTime="2026-01-26 16:23:25.944690078 +0000 UTC m=+1081.105962347" Jan 26 16:23:25 crc kubenswrapper[4680]: I0126 16:23:25.957684 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" 
podStartSLOduration=6.572863959 podStartE2EDuration="17.957668746s" podCreationTimestamp="2026-01-26 16:23:08 +0000 UTC" firstStartedPulling="2026-01-26 16:23:12.436024667 +0000 UTC m=+1067.597296946" lastFinishedPulling="2026-01-26 16:23:23.820829454 +0000 UTC m=+1078.982101733" observedRunningTime="2026-01-26 16:23:25.956648937 +0000 UTC m=+1081.117921216" watchObservedRunningTime="2026-01-26 16:23:25.957668746 +0000 UTC m=+1081.118941015" Jan 26 16:23:33 crc kubenswrapper[4680]: I0126 16:23:33.972022 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:34 crc kubenswrapper[4680]: I0126 16:23:34.406960 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:34 crc kubenswrapper[4680]: I0126 16:23:34.510338 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"] Jan 26 16:23:34 crc kubenswrapper[4680]: I0126 16:23:34.510547 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="dnsmasq-dns" containerID="cri-o://c160b6b8c6c03e7fa7319eee8eb7a32c9e660c05df07f2080a47f02886c9a5f7" gracePeriod=10 Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.798187 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.800782 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.801440 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2sbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(6e6f45ac-80ed-41f2-b9b8-94e60a1656d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.802980 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.995314 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.995376 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.995545 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncfhfbhc9hc7h56h65ch5d7h5c8h78h588h5ch5d8h56fh585h556h5b5h5bh58ch548h575h644h59fh68fh59hf7h59h87h585h557h668h5cch79q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mscfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-c86j2_openstack(5db7b388-c09e-441f-88db-13916a2b9208): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:23:36 crc kubenswrapper[4680]: E0126 16:23:36.998408 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-c86j2" podUID="5db7b388-c09e-441f-88db-13916a2b9208" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.023479 4680 generic.go:334] "Generic (PLEG): container finished" podID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerID="c160b6b8c6c03e7fa7319eee8eb7a32c9e660c05df07f2080a47f02886c9a5f7" exitCode=0 Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.023520 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" event={"ID":"8a0254d2-59ab-4ccd-93b8-246ee618bbf3","Type":"ContainerDied","Data":"c160b6b8c6c03e7fa7319eee8eb7a32c9e660c05df07f2080a47f02886c9a5f7"} Jan 26 16:23:37 crc kubenswrapper[4680]: E0126 16:23:37.024827 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" Jan 26 16:23:37 crc kubenswrapper[4680]: E0126 16:23:37.025392 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovn-controller-c86j2" podUID="5db7b388-c09e-441f-88db-13916a2b9208" Jan 26 16:23:37 crc kubenswrapper[4680]: E0126 16:23:37.227192 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:37 crc kubenswrapper[4680]: E0126 16:23:37.227249 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:23:37 crc kubenswrapper[4680]: E0126 16:23:37.227425 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:c3923531bcda0b0811b2d5053f189beb,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h5b9hf9h9dh5bfh576h7chdh554h677h77h577h564h97h6fh76h579h546h9h5d7h557h57ch5f8h5b7h5f7h6h694h64h596hdch78h685q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94bc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(cdcb3cbb-b842-443f-9a47-749970d13f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.712583 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.844805 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config\") pod \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.844921 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc\") pod \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.844992 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbpb9\" (UniqueName: \"kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9\") pod \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\" (UID: \"8a0254d2-59ab-4ccd-93b8-246ee618bbf3\") " Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.848938 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9" (OuterVolumeSpecName: "kube-api-access-jbpb9") pod "8a0254d2-59ab-4ccd-93b8-246ee618bbf3" (UID: "8a0254d2-59ab-4ccd-93b8-246ee618bbf3"). InnerVolumeSpecName "kube-api-access-jbpb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.875762 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a0254d2-59ab-4ccd-93b8-246ee618bbf3" (UID: "8a0254d2-59ab-4ccd-93b8-246ee618bbf3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.876651 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config" (OuterVolumeSpecName: "config") pod "8a0254d2-59ab-4ccd-93b8-246ee618bbf3" (UID: "8a0254d2-59ab-4ccd-93b8-246ee618bbf3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.946801 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.946829 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:37 crc kubenswrapper[4680]: I0126 16:23:37.946840 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbpb9\" (UniqueName: \"kubernetes.io/projected/8a0254d2-59ab-4ccd-93b8-246ee618bbf3-kube-api-access-jbpb9\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.032617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" event={"ID":"8a0254d2-59ab-4ccd-93b8-246ee618bbf3","Type":"ContainerDied","Data":"2e2e9bd3ee18ff3530c88689ee53d880dc7e5af48ecff895a1ed9d2c0c101049"} Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.032657 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fc87b5b9-5wjrd" Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.032673 4680 scope.go:117] "RemoveContainer" containerID="c160b6b8c6c03e7fa7319eee8eb7a32c9e660c05df07f2080a47f02886c9a5f7" Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.068625 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"] Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.075173 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fc87b5b9-5wjrd"] Jan 26 16:23:38 crc kubenswrapper[4680]: I0126 16:23:38.140413 4680 scope.go:117] "RemoveContainer" containerID="39691f5600e925638742440d0a5604bdc6d25a337109958b8b4c51200e0bde43" Jan 26 16:23:38 crc kubenswrapper[4680]: E0126 16:23:38.445704 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 16:23:38 crc kubenswrapper[4680]: E0126 16:23:38.445978 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 16:23:38 crc kubenswrapper[4680]: E0126 16:23:38.446110 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ct57d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(22f475ff-bcba-4cdd-a6ed-62be26882b42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:23:38 crc kubenswrapper[4680]: E0126 16:23:38.447330 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.042676 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c5f856a6-fa8d-4fd6-8f69-a13b165488b4","Type":"ContainerStarted","Data":"d9eec15ca576b264ef1aaf00b38a09c0e31ae323908a8c1ab02a3f3fb04b7b72"} Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.045757 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6476c77-06ae-4747-900e-41566a6063ca","Type":"ContainerStarted","Data":"8be53b2b501145e133945a8cb18c5cf46a1757edda815ecae720db055329f8ca"} Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.048730 4680 generic.go:334] "Generic (PLEG): container finished" podID="19558e19-d16d-437a-87fd-2d02181963c8" containerID="94d6c4e2882c428a98a958d39c954e13e0b17173cabfb7e48d1057b1eabca7cf" exitCode=0 Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.049133 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f49hh" event={"ID":"19558e19-d16d-437a-87fd-2d02181963c8","Type":"ContainerDied","Data":"94d6c4e2882c428a98a958d39c954e13e0b17173cabfb7e48d1057b1eabca7cf"} Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.051098 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"85ecf947-885e-418d-871c-bbef3a8353fe","Type":"ContainerStarted","Data":"f5948122402a349cfcc7d6d24450d01e51e15d40d837e1c8d120094672a62ba3"} Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.051322 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 16:23:39 crc kubenswrapper[4680]: E0126 16:23:39.052729 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.086640 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.592588455 podStartE2EDuration="26.08662233s" podCreationTimestamp="2026-01-26 16:23:13 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.092577186 +0000 UTC m=+1079.253849455" lastFinishedPulling="2026-01-26 16:23:37.586611061 +0000 UTC m=+1092.747883330" observedRunningTime="2026-01-26 16:23:39.086525568 +0000 UTC m=+1094.247797847" watchObservedRunningTime="2026-01-26 16:23:39.08662233 +0000 UTC m=+1094.247894599" Jan 26 16:23:39 crc kubenswrapper[4680]: I0126 16:23:39.177455 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" path="/var/lib/kubelet/pods/8a0254d2-59ab-4ccd-93b8-246ee618bbf3/volumes" Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.059316 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerStarted","Data":"7054a2cc380f039ebb9edb2c5103ef606ae60293f81d2038e21f08e9df3efbc5"} Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.063950 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerStarted","Data":"84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961"} Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.069910 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f49hh" event={"ID":"19558e19-d16d-437a-87fd-2d02181963c8","Type":"ContainerStarted","Data":"e270f25df388e85f42db747c7f8d541cdc57a24c5b1fb7c815fff0fcbdaab66d"} Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.069943 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.069957 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f49hh" event={"ID":"19558e19-d16d-437a-87fd-2d02181963c8","Type":"ContainerStarted","Data":"4885008d5699ee5aebc3a55b3bc18cb8d1d6e7c1f6bc9a474f732b64006b09d0"} Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.070565 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:23:40 crc kubenswrapper[4680]: I0126 16:23:40.106610 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f49hh" podStartSLOduration=9.449685409 podStartE2EDuration="22.106591247s" podCreationTimestamp="2026-01-26 16:23:18 +0000 UTC" firstStartedPulling="2026-01-26 16:23:25.065196958 +0000 UTC m=+1080.226469227" lastFinishedPulling="2026-01-26 16:23:37.722102796 +0000 UTC m=+1092.883375065" observedRunningTime="2026-01-26 16:23:40.098342693 +0000 UTC m=+1095.259614962" watchObservedRunningTime="2026-01-26 16:23:40.106591247 +0000 UTC m=+1095.267863526" Jan 26 16:23:41 crc kubenswrapper[4680]: E0126 16:23:41.542197 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="cdcb3cbb-b842-443f-9a47-749970d13f36" Jan 26 16:23:42 crc kubenswrapper[4680]: I0126 16:23:42.087804 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cdcb3cbb-b842-443f-9a47-749970d13f36","Type":"ContainerStarted","Data":"ed05cdf8144313123052e4e2355d8c46bce96d9c2b1960a2ccff5b7bb2a3e748"} Jan 26 16:23:42 crc kubenswrapper[4680]: E0126 16:23:42.089790 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="cdcb3cbb-b842-443f-9a47-749970d13f36" Jan 26 16:23:42 crc kubenswrapper[4680]: I0126 16:23:42.092033 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c5f856a6-fa8d-4fd6-8f69-a13b165488b4","Type":"ContainerStarted","Data":"e81311fac03252760d7385c27423e056022d9d49f579c4e9e26a3e3856e3e2f7"} Jan 26 16:23:42 crc kubenswrapper[4680]: I0126 16:23:42.096255 4680 generic.go:334] "Generic (PLEG): container finished" podID="e6476c77-06ae-4747-900e-41566a6063ca" containerID="8be53b2b501145e133945a8cb18c5cf46a1757edda815ecae720db055329f8ca" exitCode=0 Jan 26 16:23:42 crc kubenswrapper[4680]: I0126 16:23:42.096345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6476c77-06ae-4747-900e-41566a6063ca","Type":"ContainerDied","Data":"8be53b2b501145e133945a8cb18c5cf46a1757edda815ecae720db055329f8ca"} Jan 26 16:23:42 crc kubenswrapper[4680]: I0126 16:23:42.958315 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.108112 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6476c77-06ae-4747-900e-41566a6063ca","Type":"ContainerStarted","Data":"8d10596bbf1e287a52ff4e776d775513fc0da2ecfc17ba48603d1394eab1cc80"} Jan 26 16:23:43 crc kubenswrapper[4680]: E0126 16:23:43.109897 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="cdcb3cbb-b842-443f-9a47-749970d13f36" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.127773 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.764941396 podStartE2EDuration="22.127753657s" podCreationTimestamp="2026-01-26 16:23:21 +0000 UTC" firstStartedPulling="2026-01-26 16:23:25.013576083 +0000 UTC m=+1080.174848352" lastFinishedPulling="2026-01-26 16:23:41.376388344 +0000 UTC m=+1096.537660613" observedRunningTime="2026-01-26 16:23:42.173605069 +0000 UTC m=+1097.334877338" watchObservedRunningTime="2026-01-26 16:23:43.127753657 +0000 UTC m=+1098.289025926" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.155178 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.468939543 podStartE2EDuration="33.155155214s" podCreationTimestamp="2026-01-26 16:23:10 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.918047271 +0000 UTC 
m=+1080.079319540" lastFinishedPulling="2026-01-26 16:23:37.604262942 +0000 UTC m=+1092.765535211" observedRunningTime="2026-01-26 16:23:43.150953855 +0000 UTC m=+1098.312226174" watchObservedRunningTime="2026-01-26 16:23:43.155155214 +0000 UTC m=+1098.316427493" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.513154 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.958189 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:43 crc kubenswrapper[4680]: I0126 16:23:43.990602 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.152541 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.416498 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"] Jan 26 16:23:44 crc kubenswrapper[4680]: E0126 16:23:44.417417 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="dnsmasq-dns" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.417492 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="dnsmasq-dns" Jan 26 16:23:44 crc kubenswrapper[4680]: E0126 16:23:44.417560 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="init" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.417611 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="init" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.417807 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0254d2-59ab-4ccd-93b8-246ee618bbf3" containerName="dnsmasq-dns" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.418595 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.424222 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.465799 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"] Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.561206 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.561493 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.561621 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864z9\" (UniqueName: \"kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.561737 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.650970 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-snrmj"] Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.652361 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.654845 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.662236 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-snrmj"] Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.663317 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.664235 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.664846 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.665662 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864z9\" (UniqueName: \"kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.664055 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.665571 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.664815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.718978 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864z9\" (UniqueName: \"kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9\") pod \"dnsmasq-dns-55647579d9-2j8l2\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") " pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.737730 4680 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55647579d9-2j8l2" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.767182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-combined-ca-bundle\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.767531 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovn-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.767726 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpj24\" (UniqueName: \"kubernetes.io/projected/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-kube-api-access-kpj24\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.767868 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovs-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.767994 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.768124 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-config\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869184 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovn-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpj24\" (UniqueName: \"kubernetes.io/projected/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-kube-api-access-kpj24\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869533 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovs-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869556 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869573 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-config\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.869660 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-combined-ca-bundle\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.870489 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovs-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.870564 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-ovn-rundir\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.873663 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-config\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.889785 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-combined-ca-bundle\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.904643 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpj24\" (UniqueName: \"kubernetes.io/projected/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-kube-api-access-kpj24\") pod \"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.905490 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad3c8966-d275-44f1-b3e6-d47f0a2ab14b-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-snrmj\" (UID: \"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b\") " pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.968732 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-snrmj" Jan 26 16:23:44 crc kubenswrapper[4680]: I0126 16:23:44.998054 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"] Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.030782 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"] Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.038903 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.044140 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.065164 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"] Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.178787 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.178832 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52bn\" (UniqueName: \"kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.178874 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.178936 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.178969 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.280263 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" Jan 26 
16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.280591 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.280652 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.280715 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52bn\" (UniqueName: \"kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.280775 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.284265 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.284667 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.285328 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.285815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.301586 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52bn\" (UniqueName: \"kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn\") pod \"dnsmasq-dns-5bb4d5f6cf-nwdlr\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") " pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.381862 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.382038 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"]
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.532826 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-snrmj"]
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.546199 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"]
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.574519 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"]
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.581557 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: W0126 16:23:45.583378 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3c8966_d275_44f1_b3e6_d47f0a2ab14b.slice/crio-d395aaa9b111a60b676f6faf3dc7220d124d8076d856cc1fcf6ddbdb6a514f22 WatchSource:0}: Error finding container d395aaa9b111a60b676f6faf3dc7220d124d8076d856cc1fcf6ddbdb6a514f22: Status 404 returned error can't find the container with id d395aaa9b111a60b676f6faf3dc7220d124d8076d856cc1fcf6ddbdb6a514f22
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.597909 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"]
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.713414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.713886 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.713973 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.714012 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnthc\" (UniqueName: \"kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.714061 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.815738 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.815788 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnthc\" (UniqueName: \"kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.815817 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.815894 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.815911 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.816860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.817525 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.817665 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.819111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.871948 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnthc\" (UniqueName: \"kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc\") pod \"dnsmasq-dns-74f78ffb8f-94fw6\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:45 crc kubenswrapper[4680]: I0126 16:23:45.909511 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.137088 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-snrmj" event={"ID":"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b","Type":"ContainerStarted","Data":"e2ee326ee362d72a0f3611d1166e5905997939080b4fe059f3f69d7f0a9b1d58"}
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.137150 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-snrmj" event={"ID":"ad3c8966-d275-44f1-b3e6-d47f0a2ab14b","Type":"ContainerStarted","Data":"d395aaa9b111a60b676f6faf3dc7220d124d8076d856cc1fcf6ddbdb6a514f22"}
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.141120 4680 generic.go:334] "Generic (PLEG): container finished" podID="fc614e9a-7b44-43c5-b685-19a3df189bf5" containerID="e1cdee92c70ccb7eda3855d8a147397df740d5695b8b393f860065f9dfa37980" exitCode=0
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.141229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55647579d9-2j8l2" event={"ID":"fc614e9a-7b44-43c5-b685-19a3df189bf5","Type":"ContainerDied","Data":"e1cdee92c70ccb7eda3855d8a147397df740d5695b8b393f860065f9dfa37980"}
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.141293 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55647579d9-2j8l2" event={"ID":"fc614e9a-7b44-43c5-b685-19a3df189bf5","Type":"ContainerStarted","Data":"b6a5613984d5df63af6a467f5f6b3dd24f1f7ee417306b68eb2294ef0f1cf676"}
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.154750 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-snrmj" podStartSLOduration=2.154731281 podStartE2EDuration="2.154731281s" podCreationTimestamp="2026-01-26 16:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:23:46.152614391 +0000 UTC m=+1101.313886660" watchObservedRunningTime="2026-01-26 16:23:46.154731281 +0000 UTC m=+1101.316003550"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.221501 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"]
Jan 26 16:23:46 crc kubenswrapper[4680]: W0126 16:23:46.422817 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977c3806_6d06_4f71_9035_67d813348eb5.slice/crio-28dc3984f0c2ef8f4455c60ca303c66a8aa73a0509274dd18149d26770d9fd40 WatchSource:0}: Error finding container 28dc3984f0c2ef8f4455c60ca303c66a8aa73a0509274dd18149d26770d9fd40: Status 404 returned error can't find the container with id 28dc3984f0c2ef8f4455c60ca303c66a8aa73a0509274dd18149d26770d9fd40
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.432825 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"]
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.441986 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55647579d9-2j8l2"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.535981 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-864z9\" (UniqueName: \"kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9\") pod \"fc614e9a-7b44-43c5-b685-19a3df189bf5\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") "
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.536036 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") pod \"fc614e9a-7b44-43c5-b685-19a3df189bf5\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") "
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.536108 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc\") pod \"fc614e9a-7b44-43c5-b685-19a3df189bf5\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") "
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.536218 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb\") pod \"fc614e9a-7b44-43c5-b685-19a3df189bf5\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") "
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.542402 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9" (OuterVolumeSpecName: "kube-api-access-864z9") pod "fc614e9a-7b44-43c5-b685-19a3df189bf5" (UID: "fc614e9a-7b44-43c5-b685-19a3df189bf5"). InnerVolumeSpecName "kube-api-access-864z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.568679 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc614e9a-7b44-43c5-b685-19a3df189bf5" (UID: "fc614e9a-7b44-43c5-b685-19a3df189bf5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:46 crc kubenswrapper[4680]: E0126 16:23:46.577209 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config podName:fc614e9a-7b44-43c5-b685-19a3df189bf5 nodeName:}" failed. No retries permitted until 2026-01-26 16:23:47.07718605 +0000 UTC m=+1102.238458319 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config") pod "fc614e9a-7b44-43c5-b685-19a3df189bf5" (UID: "fc614e9a-7b44-43c5-b685-19a3df189bf5") : error deleting /var/lib/kubelet/pods/fc614e9a-7b44-43c5-b685-19a3df189bf5/volume-subpaths: remove /var/lib/kubelet/pods/fc614e9a-7b44-43c5-b685-19a3df189bf5/volume-subpaths: no such file or directory
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.577449 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc614e9a-7b44-43c5-b685-19a3df189bf5" (UID: "fc614e9a-7b44-43c5-b685-19a3df189bf5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.638584 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-864z9\" (UniqueName: \"kubernetes.io/projected/fc614e9a-7b44-43c5-b685-19a3df189bf5-kube-api-access-864z9\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.639369 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.639417 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.707938 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 26 16:23:46 crc kubenswrapper[4680]: E0126 16:23:46.708320 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc614e9a-7b44-43c5-b685-19a3df189bf5" containerName="init"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.708342 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc614e9a-7b44-43c5-b685-19a3df189bf5" containerName="init"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.708535 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc614e9a-7b44-43c5-b685-19a3df189bf5" containerName="init"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.714665 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.717269 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.717937 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hdwlm"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.717999 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.718521 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.734088 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842693 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdgk\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-kube-api-access-hjdgk\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842739 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842825 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842886 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101eb26d-100c-478e-bb84-dcc69e480c11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842918 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-cache\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.842934 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-lock\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944760 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdgk\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-kube-api-access-hjdgk\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944809 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944850 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944905 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101eb26d-100c-478e-bb84-dcc69e480c11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944930 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-cache\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.944944 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-lock\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.945133 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.945433 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-lock\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: E0126 16:23:46.945432 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 16:23:46 crc kubenswrapper[4680]: E0126 16:23:46.945483 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.945518 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/101eb26d-100c-478e-bb84-dcc69e480c11-cache\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: E0126 16:23:46.945530 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift podName:101eb26d-100c-478e-bb84-dcc69e480c11 nodeName:}" failed. No retries permitted until 2026-01-26 16:23:47.445512813 +0000 UTC m=+1102.606785082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift") pod "swift-storage-0" (UID: "101eb26d-100c-478e-bb84-dcc69e480c11") : configmap "swift-ring-files" not found
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.950570 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101eb26d-100c-478e-bb84-dcc69e480c11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.968361 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdgk\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-kube-api-access-hjdgk\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.968865 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.998435 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-c2zhh"]
Jan 26 16:23:46 crc kubenswrapper[4680]: I0126 16:23:46.999927 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.004399 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.004571 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.004683 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.014734 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c2zhh"]
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.046817 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047372 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047487 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047589 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7kd\" (UniqueName: \"kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047699 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.047825 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150201 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") pod \"fc614e9a-7b44-43c5-b685-19a3df189bf5\" (UID: \"fc614e9a-7b44-43c5-b685-19a3df189bf5\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150676 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150708 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7kd\" (UniqueName: \"kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150733 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150761 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150828 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.150856 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.153626 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.154418 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.154637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config" (OuterVolumeSpecName: "config") pod "fc614e9a-7b44-43c5-b685-19a3df189bf5" (UID: "fc614e9a-7b44-43c5-b685-19a3df189bf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.155200 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.159240 4680 generic.go:334] "Generic (PLEG): container finished" podID="977c3806-6d06-4f71-9035-67d813348eb5" containerID="d0b34e3ee7b7fca85f3f2640fb94b257ee1118b771aea7a004f9839357b99221" exitCode=0
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.159402 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" event={"ID":"977c3806-6d06-4f71-9035-67d813348eb5","Type":"ContainerDied","Data":"d0b34e3ee7b7fca85f3f2640fb94b257ee1118b771aea7a004f9839357b99221"}
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.159498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" event={"ID":"977c3806-6d06-4f71-9035-67d813348eb5","Type":"ContainerStarted","Data":"28dc3984f0c2ef8f4455c60ca303c66a8aa73a0509274dd18149d26770d9fd40"}
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.161714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55647579d9-2j8l2" event={"ID":"fc614e9a-7b44-43c5-b685-19a3df189bf5","Type":"ContainerDied","Data":"b6a5613984d5df63af6a467f5f6b3dd24f1f7ee417306b68eb2294ef0f1cf676"}
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.161767 4680 scope.go:117] "RemoveContainer" containerID="e1cdee92c70ccb7eda3855d8a147397df740d5695b8b393f860065f9dfa37980"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.161882 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55647579d9-2j8l2"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.164114 4680 generic.go:334] "Generic (PLEG): container finished" podID="440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" containerID="e1b9cead8a79044789c2603342d67627d0aefd88444f6466dfe8465e1ad35acf" exitCode=0
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.164595 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" event={"ID":"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7","Type":"ContainerDied","Data":"e1b9cead8a79044789c2603342d67627d0aefd88444f6466dfe8465e1ad35acf"}
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.164620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" event={"ID":"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7","Type":"ContainerStarted","Data":"a45c9ff635c515fd710bb8c1213d3b154cb0cba410548806d56d1fe04fbb6612"}
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.168969 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.179193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.182607 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.183365 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7kd\" (UniqueName: \"kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd\") pod \"swift-ring-rebalance-c2zhh\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.260500 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc614e9a-7b44-43c5-b685-19a3df189bf5-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.275443 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"]
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.280180 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55647579d9-2j8l2"]
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.325558 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c2zhh"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.463536 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:47 crc kubenswrapper[4680]: E0126 16:23:47.463747 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 16:23:47 crc kubenswrapper[4680]: E0126 16:23:47.463761 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 16:23:47 crc kubenswrapper[4680]: E0126 16:23:47.463827 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift podName:101eb26d-100c-478e-bb84-dcc69e480c11 nodeName:}" failed. No retries permitted until 2026-01-26 16:23:48.463812393 +0000 UTC m=+1103.625084662 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift") pod "swift-storage-0" (UID: "101eb26d-100c-478e-bb84-dcc69e480c11") : configmap "swift-ring-files" not found
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.517350 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.567119 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config\") pod \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.567173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb\") pod \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.567216 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb\") pod \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.567266 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c52bn\" (UniqueName: \"kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn\") pod \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.567309 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc\") pod \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\" (UID: \"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7\") "
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.583184 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn" (OuterVolumeSpecName: "kube-api-access-c52bn") pod "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" (UID: "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7"). InnerVolumeSpecName "kube-api-access-c52bn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.590052 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" (UID: "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.594838 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" (UID: "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.604785 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config" (OuterVolumeSpecName: "config") pod "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" (UID: "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.615871 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" (UID: "440afb5f-3e37-47ab-ab8a-f38fef6c0fe7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.667968 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c52bn\" (UniqueName: \"kubernetes.io/projected/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-kube-api-access-c52bn\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.668013 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.668024 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.668032 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.668039 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 16:23:47 crc kubenswrapper[4680]: I0126 16:23:47.905568 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c2zhh"]
Jan 26 16:23:47 crc kubenswrapper[4680]: W0126 16:23:47.909052 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c79b7df_0dfd_42eb_bb02_09bf51f250e2.slice/crio-1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0 WatchSource:0}: Error finding container 1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0: Status 404 returned error can't find the container with id 1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.181183 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2zhh" event={"ID":"4c79b7df-0dfd-42eb-bb02-09bf51f250e2","Type":"ContainerStarted","Data":"1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0"}
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.187865 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" event={"ID":"977c3806-6d06-4f71-9035-67d813348eb5","Type":"ContainerStarted","Data":"77e0e3f3ab0ff109f7218177effc95172fa2197486b92692fe7d37505e3858af"}
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.188276 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6"
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.200189 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr" event={"ID":"440afb5f-3e37-47ab-ab8a-f38fef6c0fe7","Type":"ContainerDied","Data":"a45c9ff635c515fd710bb8c1213d3b154cb0cba410548806d56d1fe04fbb6612"}
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.200230 4680 scope.go:117] "RemoveContainer" containerID="e1b9cead8a79044789c2603342d67627d0aefd88444f6466dfe8465e1ad35acf"
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.200347 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.254868 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" podStartSLOduration=3.254849622 podStartE2EDuration="3.254849622s" podCreationTimestamp="2026-01-26 16:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:23:48.213991173 +0000 UTC m=+1103.375263442" watchObservedRunningTime="2026-01-26 16:23:48.254849622 +0000 UTC m=+1103.416121891"
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.262776 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"]
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.269198 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bb4d5f6cf-nwdlr"]
Jan 26 16:23:48 crc kubenswrapper[4680]: I0126 16:23:48.482123 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:48 crc kubenswrapper[4680]: E0126 16:23:48.482335 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 16:23:48 crc kubenswrapper[4680]: E0126 16:23:48.482499 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 16:23:48 crc kubenswrapper[4680]: E0126 16:23:48.482566 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift podName:101eb26d-100c-478e-bb84-dcc69e480c11 nodeName:}" failed. No retries permitted until 2026-01-26 16:23:50.482533134 +0000 UTC m=+1105.643805403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift") pod "swift-storage-0" (UID: "101eb26d-100c-478e-bb84-dcc69e480c11") : configmap "swift-ring-files" not found
Jan 26 16:23:49 crc kubenswrapper[4680]: I0126 16:23:49.185484 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" path="/var/lib/kubelet/pods/440afb5f-3e37-47ab-ab8a-f38fef6c0fe7/volumes"
Jan 26 16:23:49 crc kubenswrapper[4680]: I0126 16:23:49.186341 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc614e9a-7b44-43c5-b685-19a3df189bf5" path="/var/lib/kubelet/pods/fc614e9a-7b44-43c5-b685-19a3df189bf5/volumes"
Jan 26 16:23:49 crc kubenswrapper[4680]: I0126 16:23:49.212407 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2" event={"ID":"5db7b388-c09e-441f-88db-13916a2b9208","Type":"ContainerStarted","Data":"18dfa1bf15432d97583b76ea301572e7cec6d0818a81636a5734bdc6067c8a1a"}
Jan 26 16:23:49 crc kubenswrapper[4680]: I0126 16:23:49.212712 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c86j2"
Jan 26 16:23:49 crc kubenswrapper[4680]: I0126 16:23:49.242463 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c86j2" podStartSLOduration=7.746859873 podStartE2EDuration="31.24244214s" podCreationTimestamp="2026-01-26 16:23:18 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.82005338 +0000 UTC m=+1079.981325649" lastFinishedPulling="2026-01-26 16:23:48.315635647 +0000 UTC m=+1103.476907916" observedRunningTime="2026-01-26 16:23:49.234242897 +0000 UTC m=+1104.395515206" watchObservedRunningTime="2026-01-26 16:23:49.24244214 +0000 UTC m=+1104.403714409"
Jan 26 16:23:50 crc kubenswrapper[4680]: I0126 16:23:50.521932 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0"
Jan 26 16:23:50 crc kubenswrapper[4680]: E0126 16:23:50.522151 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 16:23:50 crc kubenswrapper[4680]: E0126 16:23:50.522504 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 16:23:50 crc kubenswrapper[4680]: E0126 16:23:50.522567 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift podName:101eb26d-100c-478e-bb84-dcc69e480c11 nodeName:}" failed. No retries permitted until 2026-01-26 16:23:54.522546038 +0000 UTC m=+1109.683818317 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift") pod "swift-storage-0" (UID: "101eb26d-100c-478e-bb84-dcc69e480c11") : configmap "swift-ring-files" not found
Jan 26 16:23:51 crc kubenswrapper[4680]: I0126 16:23:51.640656 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 26 16:23:51 crc kubenswrapper[4680]: I0126 16:23:51.640959 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 26 16:23:51 crc kubenswrapper[4680]: I0126 16:23:51.723347 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.238791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22f475ff-bcba-4cdd-a6ed-62be26882b42","Type":"ContainerStarted","Data":"5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0"}
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.239366 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.240215 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerStarted","Data":"25f7c36e2bddefeb02f9af01dd7ea1c96b7d9f77d92fb7094ee8088e307a2234"}
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.241830 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2zhh" event={"ID":"4c79b7df-0dfd-42eb-bb02-09bf51f250e2","Type":"ContainerStarted","Data":"176a6417bfef7e3e75818fd3f52ff758c3cd01e3b6945e996e8aea5f6ba2c07a"}
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.262113 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.548820212999999 podStartE2EDuration="37.262090336s" podCreationTimestamp="2026-01-26 16:23:15 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.917605669 +0000 UTC m=+1080.078877938" lastFinishedPulling="2026-01-26 16:23:51.630875792 +0000 UTC m=+1106.792148061" observedRunningTime="2026-01-26 16:23:52.258845674 +0000 UTC m=+1107.420117943" watchObservedRunningTime="2026-01-26 16:23:52.262090336 +0000 UTC m=+1107.423362605"
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.297971 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-c2zhh" podStartSLOduration=2.573512756 podStartE2EDuration="6.297949984s" podCreationTimestamp="2026-01-26 16:23:46 +0000 UTC" firstStartedPulling="2026-01-26 16:23:47.910910091 +0000 UTC m=+1103.072182350" lastFinishedPulling="2026-01-26 16:23:51.635347309 +0000 UTC m=+1106.796619578" observedRunningTime="2026-01-26 16:23:52.296356859 +0000 UTC m=+1107.457629128" watchObservedRunningTime="2026-01-26 16:23:52.297949984 +0000 UTC m=+1107.459222253"
Jan 26 16:23:52 crc kubenswrapper[4680]: I0126 16:23:52.314534 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.258536 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7dmt5"]
Jan 26 16:23:53 crc kubenswrapper[4680]: E0126 16:23:53.259296 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" containerName="init"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.259316 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" containerName="init"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.259543 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="440afb5f-3e37-47ab-ab8a-f38fef6c0fe7" containerName="init"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.260165 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.269045 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7605-account-create-update-llm8x"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.273430 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.276088 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.279103 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7605-account-create-update-llm8x"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.321010 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7dmt5"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.377032 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.377102 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vxb\" (UniqueName: \"kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.377215 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ltv\" (UniqueName: \"kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.377245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.438748 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s2hgk"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.439885 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.447356 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s2hgk"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.478201 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ltv\" (UniqueName: \"kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.478262 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.478418 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.478441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vxb\" (UniqueName: \"kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.479048 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.479219 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.498019 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vxb\" (UniqueName: \"kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb\") pod \"keystone-db-create-7dmt5\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.502469 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ltv\" (UniqueName: \"kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv\") pod \"keystone-7605-account-create-update-llm8x\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.550263 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fa50-account-create-update-vxdxg"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.555508 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.563634 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.577367 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7dmt5"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.577055 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fa50-account-create-update-vxdxg"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.580473 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.580574 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9kb\" (UniqueName: \"kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.609509 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7605-account-create-update-llm8x"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.682159 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zckn\" (UniqueName: \"kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.682222 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.682245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.682283 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9kb\" (UniqueName: \"kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.683267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.695315 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-525dd"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.696974 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-525dd"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.713748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9kb\" (UniqueName: \"kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb\") pod \"placement-db-create-s2hgk\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.720306 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-525dd"]
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.757955 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s2hgk"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.783673 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564x9\" (UniqueName: \"kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.783799 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.783890 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zckn\" (UniqueName: \"kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.783925 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.784851 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg"
Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.793932 4680 kubelet.go:2421] "SyncLoop ADD"
source="api" pods=["openstack/glance-73bf-account-create-update-clmpj"] Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.795807 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.802417 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-73bf-account-create-update-clmpj"] Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.805149 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.817893 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zckn\" (UniqueName: \"kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn\") pod \"placement-fa50-account-create-update-vxdxg\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " pod="openstack/placement-fa50-account-create-update-vxdxg" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.884924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-564x9\" (UniqueName: \"kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.885047 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.885815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.885940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.886024 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6k2\" (UniqueName: \"kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.921710 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-564x9\" (UniqueName: \"kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9\") pod \"glance-db-create-525dd\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " pod="openstack/glance-db-create-525dd" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.971398 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fa50-account-create-update-vxdxg" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.987287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.987376 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6k2\" (UniqueName: \"kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:53 crc kubenswrapper[4680]: I0126 16:23:53.988943 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.016119 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6k2\" (UniqueName: \"kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2\") pod \"glance-73bf-account-create-update-clmpj\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.158182 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-525dd" Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.163807 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.232364 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7dmt5"] Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.264973 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7dmt5" event={"ID":"7a8cc63a-ae3e-494d-b906-9c2d31441be3","Type":"ContainerStarted","Data":"c64c21d094f4a811615c21f4c2b6d03eddda8f193dd382326a2fee5f7b2c1643"} Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.396409 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7605-account-create-update-llm8x"] Jan 26 16:23:54 crc kubenswrapper[4680]: W0126 16:23:54.417767 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8fdc0f7_213f_408f_9ae7_590b8e900e28.slice/crio-e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06 WatchSource:0}: Error finding container e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06: Status 404 returned error can't find the container with id e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06 Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.535955 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s2hgk"] Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.603640 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0" Jan 26 16:23:54 crc kubenswrapper[4680]: E0126 16:23:54.604060 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 16:23:54 crc kubenswrapper[4680]: E0126 16:23:54.604104 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 16:23:54 crc kubenswrapper[4680]: E0126 16:23:54.604184 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift podName:101eb26d-100c-478e-bb84-dcc69e480c11 nodeName:}" failed. No retries permitted until 2026-01-26 16:24:02.604140032 +0000 UTC m=+1117.765412301 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift") pod "swift-storage-0" (UID: "101eb26d-100c-478e-bb84-dcc69e480c11") : configmap "swift-ring-files" not found Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.632990 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fa50-account-create-update-vxdxg"] Jan 26 16:23:54 crc kubenswrapper[4680]: W0126 16:23:54.666423 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode092ed22_18ed_48b1_9d0d_43b93d8a60c6.slice/crio-2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2 WatchSource:0}: Error finding container 2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2: Status 404 returned error can't find the container with id 2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2 Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.797862 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-525dd"] Jan 26 16:23:54 crc kubenswrapper[4680]: I0126 16:23:54.814972 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-73bf-account-create-update-clmpj"] Jan 26 16:23:54 crc kubenswrapper[4680]: W0126 16:23:54.816536 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6087753a_56bf_4286_9ec8_fe1ce34d08f7.slice/crio-285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0 WatchSource:0}: Error finding container 285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0: Status 404 returned error can't find the container with id 285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0 Jan 26 16:23:54 crc kubenswrapper[4680]: W0126 16:23:54.832146 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71f54278_5c8d_45e7_9d36_127fff79e22a.slice/crio-bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579 WatchSource:0}: Error finding container bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579: Status 404 returned error can't find the container with id bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579 Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.278442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-525dd" event={"ID":"6087753a-56bf-4286-9ec8-fe1ce34d08f7","Type":"ContainerStarted","Data":"285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0"} Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.281152 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-73bf-account-create-update-clmpj" event={"ID":"71f54278-5c8d-45e7-9d36-127fff79e22a","Type":"ContainerStarted","Data":"bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579"} Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.283450 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fa50-account-create-update-vxdxg" event={"ID":"e092ed22-18ed-48b1-9d0d-43b93d8a60c6","Type":"ContainerStarted","Data":"2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2"} Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.287785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7605-account-create-update-llm8x" 
event={"ID":"e8fdc0f7-213f-408f-9ae7-590b8e900e28","Type":"ContainerStarted","Data":"e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06"} Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.289657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s2hgk" event={"ID":"4fdcb5d9-3066-4592-a0db-290c55aa87d6","Type":"ContainerStarted","Data":"63f40af307f21ea8509bac9b696174a53d506f4cde1c48c8d80c9c8397a63e78"} Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.913032 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.985972 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:55 crc kubenswrapper[4680]: I0126 16:23:55.986450 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="dnsmasq-dns" containerID="cri-o://93a266f10df23be2dd520422f8354149c408ac104a2425cba95ea85885983787" gracePeriod=10 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.299050 4680 generic.go:334] "Generic (PLEG): container finished" podID="4fdcb5d9-3066-4592-a0db-290c55aa87d6" containerID="ce6d140f225f19194d0ced973fe88442985e00eb499ecc01139b47045112687d" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.299382 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s2hgk" event={"ID":"4fdcb5d9-3066-4592-a0db-290c55aa87d6","Type":"ContainerDied","Data":"ce6d140f225f19194d0ced973fe88442985e00eb499ecc01139b47045112687d"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.312118 4680 generic.go:334] "Generic (PLEG): container finished" podID="6087753a-56bf-4286-9ec8-fe1ce34d08f7" containerID="6f95f12d05f6e32e2eb66958edb1e8659a970014cb16e0b2e9c4112f51c0d068" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.312205 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-525dd" event={"ID":"6087753a-56bf-4286-9ec8-fe1ce34d08f7","Type":"ContainerDied","Data":"6f95f12d05f6e32e2eb66958edb1e8659a970014cb16e0b2e9c4112f51c0d068"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.313947 4680 generic.go:334] "Generic (PLEG): container finished" podID="71f54278-5c8d-45e7-9d36-127fff79e22a" containerID="b78535345c455f21ddb5be5ae69caff7f6ab249ef73ec9a19f4253430492adf8" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.314033 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-73bf-account-create-update-clmpj" event={"ID":"71f54278-5c8d-45e7-9d36-127fff79e22a","Type":"ContainerDied","Data":"b78535345c455f21ddb5be5ae69caff7f6ab249ef73ec9a19f4253430492adf8"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.316193 4680 generic.go:334] "Generic (PLEG): container finished" podID="662eacd6-016f-459a-806f-4bf940065b6a" containerID="93a266f10df23be2dd520422f8354149c408ac104a2425cba95ea85885983787" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.316260 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" event={"ID":"662eacd6-016f-459a-806f-4bf940065b6a","Type":"ContainerDied","Data":"93a266f10df23be2dd520422f8354149c408ac104a2425cba95ea85885983787"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.318268 4680 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cdcb3cbb-b842-443f-9a47-749970d13f36","Type":"ContainerStarted","Data":"7f6dd9473c9828b6f4437a729a3f5788ff4eb4660b060bd743bc814b7e91c2ab"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.323288 4680 generic.go:334] "Generic (PLEG): container finished" podID="e092ed22-18ed-48b1-9d0d-43b93d8a60c6" containerID="2a2ccc8c9a928fbd6bb3d9820de33b0a71d11c9cb6683d0f6796750e392f4bc9" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.323351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fa50-account-create-update-vxdxg" event={"ID":"e092ed22-18ed-48b1-9d0d-43b93d8a60c6","Type":"ContainerDied","Data":"2a2ccc8c9a928fbd6bb3d9820de33b0a71d11c9cb6683d0f6796750e392f4bc9"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.325712 4680 generic.go:334] "Generic (PLEG): container finished" podID="7a8cc63a-ae3e-494d-b906-9c2d31441be3" containerID="1868f71e7bb0d2d4e14e86dfb4a1c0b739515cd1b5bb543e10b49138567d1d03" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.325758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7dmt5" event={"ID":"7a8cc63a-ae3e-494d-b906-9c2d31441be3","Type":"ContainerDied","Data":"1868f71e7bb0d2d4e14e86dfb4a1c0b739515cd1b5bb543e10b49138567d1d03"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.330605 4680 generic.go:334] "Generic (PLEG): container finished" podID="e8fdc0f7-213f-408f-9ae7-590b8e900e28" containerID="f6ad77995abfd86e5669048e685315a648217bd21797c45bb1f28f7ce1eed326" exitCode=0 Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.330652 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7605-account-create-update-llm8x" event={"ID":"e8fdc0f7-213f-408f-9ae7-590b8e900e28","Type":"ContainerDied","Data":"f6ad77995abfd86e5669048e685315a648217bd21797c45bb1f28f7ce1eed326"} Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.376353 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.551246482 podStartE2EDuration="38.376328557s" podCreationTimestamp="2026-01-26 16:23:18 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.746949966 +0000 UTC m=+1079.908222235" lastFinishedPulling="2026-01-26 16:23:55.572032041 +0000 UTC m=+1110.733304310" observedRunningTime="2026-01-26 16:23:56.371895191 +0000 UTC m=+1111.533167460" watchObservedRunningTime="2026-01-26 16:23:56.376328557 +0000 UTC m=+1111.537600826" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.528640 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.657549 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc\") pod \"662eacd6-016f-459a-806f-4bf940065b6a\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.657867 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config\") pod \"662eacd6-016f-459a-806f-4bf940065b6a\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.658280 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmpkv\" (UniqueName: \"kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv\") pod \"662eacd6-016f-459a-806f-4bf940065b6a\" (UID: \"662eacd6-016f-459a-806f-4bf940065b6a\") " Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.673233 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv" (OuterVolumeSpecName: "kube-api-access-cmpkv") pod "662eacd6-016f-459a-806f-4bf940065b6a" (UID: "662eacd6-016f-459a-806f-4bf940065b6a"). InnerVolumeSpecName "kube-api-access-cmpkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.697345 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config" (OuterVolumeSpecName: "config") pod "662eacd6-016f-459a-806f-4bf940065b6a" (UID: "662eacd6-016f-459a-806f-4bf940065b6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.710517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "662eacd6-016f-459a-806f-4bf940065b6a" (UID: "662eacd6-016f-459a-806f-4bf940065b6a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.759867 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.759911 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662eacd6-016f-459a-806f-4bf940065b6a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:56 crc kubenswrapper[4680]: I0126 16:23:56.759924 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmpkv\" (UniqueName: \"kubernetes.io/projected/662eacd6-016f-459a-806f-4bf940065b6a-kube-api-access-cmpkv\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.339634 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" event={"ID":"662eacd6-016f-459a-806f-4bf940065b6a","Type":"ContainerDied","Data":"a7959fabc2181069f0928ae923088f2b5512c794e3de8a3f31ce0b0163c6a066"} Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.339698 4680 scope.go:117] "RemoveContainer" containerID="93a266f10df23be2dd520422f8354149c408ac104a2425cba95ea85885983787" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.346823 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d4ff7b85-svzjv" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.378232 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.396646 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d4ff7b85-svzjv"] Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.406529 4680 scope.go:117] "RemoveContainer" containerID="e5da89781776f8576f76e02b8e5e866b39eca83d4c58951e98752b266825c338" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.776030 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7dmt5" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.889258 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts\") pod \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.889779 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vxb\" (UniqueName: \"kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb\") pod \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\" (UID: \"7a8cc63a-ae3e-494d-b906-9c2d31441be3\") " Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.891409 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a8cc63a-ae3e-494d-b906-9c2d31441be3" (UID: "7a8cc63a-ae3e-494d-b906-9c2d31441be3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.902394 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb" (OuterVolumeSpecName: "kube-api-access-s5vxb") pod "7a8cc63a-ae3e-494d-b906-9c2d31441be3" (UID: "7a8cc63a-ae3e-494d-b906-9c2d31441be3"). InnerVolumeSpecName "kube-api-access-s5vxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.998043 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a8cc63a-ae3e-494d-b906-9c2d31441be3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:57 crc kubenswrapper[4680]: I0126 16:23:57.998116 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vxb\" (UniqueName: \"kubernetes.io/projected/7a8cc63a-ae3e-494d-b906-9c2d31441be3-kube-api-access-s5vxb\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.041028 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7605-account-create-update-llm8x" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.049691 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fa50-account-create-update-vxdxg" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.074001 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s2hgk" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.088160 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.099804 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-525dd" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.100012 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zckn\" (UniqueName: \"kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn\") pod \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.100053 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts\") pod \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.100277 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts\") pod \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\" (UID: \"e092ed22-18ed-48b1-9d0d-43b93d8a60c6\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.100313 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6ltv\" (UniqueName: \"kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv\") pod \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\" (UID: \"e8fdc0f7-213f-408f-9ae7-590b8e900e28\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.101016 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8fdc0f7-213f-408f-9ae7-590b8e900e28" (UID: "e8fdc0f7-213f-408f-9ae7-590b8e900e28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.101500 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e092ed22-18ed-48b1-9d0d-43b93d8a60c6" (UID: "e092ed22-18ed-48b1-9d0d-43b93d8a60c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.108025 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn" (OuterVolumeSpecName: "kube-api-access-5zckn") pod "e092ed22-18ed-48b1-9d0d-43b93d8a60c6" (UID: "e092ed22-18ed-48b1-9d0d-43b93d8a60c6"). InnerVolumeSpecName "kube-api-access-5zckn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.111301 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv" (OuterVolumeSpecName: "kube-api-access-r6ltv") pod "e8fdc0f7-213f-408f-9ae7-590b8e900e28" (UID: "e8fdc0f7-213f-408f-9ae7-590b8e900e28"). InnerVolumeSpecName "kube-api-access-r6ltv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.201774 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-564x9\" (UniqueName: \"kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9\") pod \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202090 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6k2\" (UniqueName: \"kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2\") pod \"71f54278-5c8d-45e7-9d36-127fff79e22a\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202249 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts\") pod \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\" (UID: \"6087753a-56bf-4286-9ec8-fe1ce34d08f7\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202353 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts\") pod \"71f54278-5c8d-45e7-9d36-127fff79e22a\" (UID: \"71f54278-5c8d-45e7-9d36-127fff79e22a\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202482 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9kb\" (UniqueName: \"kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb\") pod \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202596 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts\") pod \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\" (UID: \"4fdcb5d9-3066-4592-a0db-290c55aa87d6\") " Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.202953 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.203804 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6ltv\" (UniqueName: \"kubernetes.io/projected/e8fdc0f7-213f-408f-9ae7-590b8e900e28-kube-api-access-r6ltv\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.203955 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zckn\" (UniqueName: \"kubernetes.io/projected/e092ed22-18ed-48b1-9d0d-43b93d8a60c6-kube-api-access-5zckn\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.204020 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8fdc0f7-213f-408f-9ae7-590b8e900e28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.203435 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71f54278-5c8d-45e7-9d36-127fff79e22a" (UID: "71f54278-5c8d-45e7-9d36-127fff79e22a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.203554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6087753a-56bf-4286-9ec8-fe1ce34d08f7" (UID: "6087753a-56bf-4286-9ec8-fe1ce34d08f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.204028 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fdcb5d9-3066-4592-a0db-290c55aa87d6" (UID: "4fdcb5d9-3066-4592-a0db-290c55aa87d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.204420 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9" (OuterVolumeSpecName: "kube-api-access-564x9") pod "6087753a-56bf-4286-9ec8-fe1ce34d08f7" (UID: "6087753a-56bf-4286-9ec8-fe1ce34d08f7"). InnerVolumeSpecName "kube-api-access-564x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.204980 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2" (OuterVolumeSpecName: "kube-api-access-nq6k2") pod "71f54278-5c8d-45e7-9d36-127fff79e22a" (UID: "71f54278-5c8d-45e7-9d36-127fff79e22a"). InnerVolumeSpecName "kube-api-access-nq6k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.206229 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb" (OuterVolumeSpecName: "kube-api-access-lx9kb") pod "4fdcb5d9-3066-4592-a0db-290c55aa87d6" (UID: "4fdcb5d9-3066-4592-a0db-290c55aa87d6"). InnerVolumeSpecName "kube-api-access-lx9kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305646 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-564x9\" (UniqueName: \"kubernetes.io/projected/6087753a-56bf-4286-9ec8-fe1ce34d08f7-kube-api-access-564x9\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305705 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6k2\" (UniqueName: \"kubernetes.io/projected/71f54278-5c8d-45e7-9d36-127fff79e22a-kube-api-access-nq6k2\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305725 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6087753a-56bf-4286-9ec8-fe1ce34d08f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305743 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f54278-5c8d-45e7-9d36-127fff79e22a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305761 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9kb\" (UniqueName: \"kubernetes.io/projected/4fdcb5d9-3066-4592-a0db-290c55aa87d6-kube-api-access-lx9kb\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.305778 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fdcb5d9-3066-4592-a0db-290c55aa87d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.354178 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fa50-account-create-update-vxdxg" event={"ID":"e092ed22-18ed-48b1-9d0d-43b93d8a60c6","Type":"ContainerDied","Data":"2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.354252 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d3624a92ce47f7612af77c48f385bd47c6f434827dda98b3323908906ae59a2" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.354363 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fa50-account-create-update-vxdxg" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.356423 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7dmt5" event={"ID":"7a8cc63a-ae3e-494d-b906-9c2d31441be3","Type":"ContainerDied","Data":"c64c21d094f4a811615c21f4c2b6d03eddda8f193dd382326a2fee5f7b2c1643"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.356470 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64c21d094f4a811615c21f4c2b6d03eddda8f193dd382326a2fee5f7b2c1643" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.356449 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7dmt5" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.358487 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7605-account-create-update-llm8x" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.359607 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7605-account-create-update-llm8x" event={"ID":"e8fdc0f7-213f-408f-9ae7-590b8e900e28","Type":"ContainerDied","Data":"e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.359637 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8902c18cebffb2f624931cceaaffca4734869d11fba39a683188e6ef78dec06" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.361574 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s2hgk" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.363768 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s2hgk" event={"ID":"4fdcb5d9-3066-4592-a0db-290c55aa87d6","Type":"ContainerDied","Data":"63f40af307f21ea8509bac9b696174a53d506f4cde1c48c8d80c9c8397a63e78"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.364158 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f40af307f21ea8509bac9b696174a53d506f4cde1c48c8d80c9c8397a63e78" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.366446 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-525dd" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.366465 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-525dd" event={"ID":"6087753a-56bf-4286-9ec8-fe1ce34d08f7","Type":"ContainerDied","Data":"285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.366487 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285e942288254f9bb6b8a44da531e04dc35c2b8d196dbf4e73d172f36b4f69b0" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.367770 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-73bf-account-create-update-clmpj" event={"ID":"71f54278-5c8d-45e7-9d36-127fff79e22a","Type":"ContainerDied","Data":"bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579"} Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.367797 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8d39519228da375c3c4f4e7794eefccf182e1bfc84d57c40d0b2901e429579" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.367832 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-73bf-account-create-update-clmpj" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.708629 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:58 crc kubenswrapper[4680]: I0126 16:23:58.749207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 16:23:59 crc kubenswrapper[4680]: I0126 16:23:59.178959 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662eacd6-016f-459a-806f-4bf940065b6a" path="/var/lib/kubelet/pods/662eacd6-016f-459a-806f-4bf940065b6a/volumes" Jan 26 16:23:59 crc kubenswrapper[4680]: I0126 16:23:59.376715 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerID="25f7c36e2bddefeb02f9af01dd7ea1c96b7d9f77d92fb7094ee8088e307a2234" exitCode=0 Jan 26 16:23:59 crc kubenswrapper[4680]: I0126 16:23:59.376845 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerDied","Data":"25f7c36e2bddefeb02f9af01dd7ea1c96b7d9f77d92fb7094ee8088e307a2234"} Jan 26 16:23:59 crc kubenswrapper[4680]: I0126 16:23:59.377534 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.286994 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bqckc"] Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.289532 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f54278-5c8d-45e7-9d36-127fff79e22a" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.292367 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f54278-5c8d-45e7-9d36-127fff79e22a" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.292504 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fdc0f7-213f-408f-9ae7-590b8e900e28" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.292622 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fdc0f7-213f-408f-9ae7-590b8e900e28" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.292715 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdcb5d9-3066-4592-a0db-290c55aa87d6" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.292799 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdcb5d9-3066-4592-a0db-290c55aa87d6" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.292902 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e092ed22-18ed-48b1-9d0d-43b93d8a60c6" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.292994 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e092ed22-18ed-48b1-9d0d-43b93d8a60c6" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.293100 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8cc63a-ae3e-494d-b906-9c2d31441be3" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.293194 4680 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7a8cc63a-ae3e-494d-b906-9c2d31441be3" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.293273 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="init" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.293351 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="init" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.293431 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="dnsmasq-dns" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.293506 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="dnsmasq-dns" Jan 26 16:24:00 crc kubenswrapper[4680]: E0126 16:24:00.293592 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6087753a-56bf-4286-9ec8-fe1ce34d08f7" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.293674 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6087753a-56bf-4286-9ec8-fe1ce34d08f7" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294092 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e092ed22-18ed-48b1-9d0d-43b93d8a60c6" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294234 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f54278-5c8d-45e7-9d36-127fff79e22a" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294317 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8cc63a-ae3e-494d-b906-9c2d31441be3" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294403 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fdc0f7-213f-408f-9ae7-590b8e900e28" containerName="mariadb-account-create-update" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294491 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6087753a-56bf-4286-9ec8-fe1ce34d08f7" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294581 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="662eacd6-016f-459a-806f-4bf940065b6a" containerName="dnsmasq-dns" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.294662 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdcb5d9-3066-4592-a0db-290c55aa87d6" containerName="mariadb-database-create" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.295831 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.304198 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.313579 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bqckc"] Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.351096 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.351182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfxsf\" (UniqueName: \"kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.425871 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.465052 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.480414 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.480530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfxsf\" (UniqueName: \"kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.513533 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfxsf\" (UniqueName: \"kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf\") pod \"root-account-create-update-bqckc\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.618966 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.620236 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.627829 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.628163 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.628563 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.634184 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.642386 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-258vn" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.647090 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.683929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbmh\" (UniqueName: \"kubernetes.io/projected/f0c59efc-bcc3-4543-8fe5-2a89f785d222-kube-api-access-xxbmh\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684348 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-config\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684418 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684467 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.684491 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-scripts\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.786797 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.786854 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-scripts\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.787063 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.787146 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbmh\" (UniqueName: \"kubernetes.io/projected/f0c59efc-bcc3-4543-8fe5-2a89f785d222-kube-api-access-xxbmh\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.787196 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-config\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.787239 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.787271 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.788372 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.789120 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-scripts\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.789493 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f0c59efc-bcc3-4543-8fe5-2a89f785d222-config\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.794250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.794454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.810964 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbmh\" (UniqueName: \"kubernetes.io/projected/f0c59efc-bcc3-4543-8fe5-2a89f785d222-kube-api-access-xxbmh\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.814432 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c59efc-bcc3-4543-8fe5-2a89f785d222-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f0c59efc-bcc3-4543-8fe5-2a89f785d222\") " pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.934536 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 16:24:00 crc kubenswrapper[4680]: I0126 16:24:00.956425 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bqckc"] Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.394082 4680 generic.go:334] "Generic (PLEG): container finished" podID="89a05291-782f-440c-9906-05e00ccde7d5" containerID="a43e37fe910400592dcea87fbd4a4d925c823cfae992be4f3e8c3f5c81d03ee3" exitCode=0 Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.394172 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqckc" event={"ID":"89a05291-782f-440c-9906-05e00ccde7d5","Type":"ContainerDied","Data":"a43e37fe910400592dcea87fbd4a4d925c823cfae992be4f3e8c3f5c81d03ee3"} Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.394576 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqckc" event={"ID":"89a05291-782f-440c-9906-05e00ccde7d5","Type":"ContainerStarted","Data":"527b0babfeebd8ade245053bdb9df13d07340c34f64998cf920db0036519855d"} Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.396740 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerStarted","Data":"1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226"} Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.400166 4680 generic.go:334] "Generic (PLEG): container finished" podID="4c79b7df-0dfd-42eb-bb02-09bf51f250e2" containerID="176a6417bfef7e3e75818fd3f52ff758c3cd01e3b6945e996e8aea5f6ba2c07a" exitCode=0 Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.400213 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-c2zhh" event={"ID":"4c79b7df-0dfd-42eb-bb02-09bf51f250e2","Type":"ContainerDied","Data":"176a6417bfef7e3e75818fd3f52ff758c3cd01e3b6945e996e8aea5f6ba2c07a"} Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.429376 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 16:24:01 crc kubenswrapper[4680]: I0126 16:24:01.429818 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371986.424967 podStartE2EDuration="50.429808952s" podCreationTimestamp="2026-01-26 16:23:11 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.489376717 +0000 UTC m=+1079.650648986" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:01.42833387 +0000 UTC m=+1116.589606139" watchObservedRunningTime="2026-01-26 16:24:01.429808952 +0000 UTC m=+1116.591081221" Jan 26 16:24:01 crc kubenswrapper[4680]: W0126 16:24:01.433360 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c59efc_bcc3_4543_8fe5_2a89f785d222.slice/crio-18b4ae714de60059f77328f8a77991ee6b306bcb12bc002e833eae7966a50bdb WatchSource:0}: Error finding container 18b4ae714de60059f77328f8a77991ee6b306bcb12bc002e833eae7966a50bdb: Status 404 returned error can't find the container with id 18b4ae714de60059f77328f8a77991ee6b306bcb12bc002e833eae7966a50bdb Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.409395 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f0c59efc-bcc3-4543-8fe5-2a89f785d222","Type":"ContainerStarted","Data":"ff98d756a31b4be3b640989de8c9aaada3ae39a448fd56fd0a4a96588b11989d"} Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.411174 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f0c59efc-bcc3-4543-8fe5-2a89f785d222","Type":"ContainerStarted","Data":"ee09ed7492b7d918d2c32efdc5bc186f9cb9bff53126be1e8d9fafcc6b3cd3f6"} Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.411299 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f0c59efc-bcc3-4543-8fe5-2a89f785d222","Type":"ContainerStarted","Data":"18b4ae714de60059f77328f8a77991ee6b306bcb12bc002e833eae7966a50bdb"} Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.437061 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.9443964459999998 podStartE2EDuration="2.437040397s" podCreationTimestamp="2026-01-26 16:24:00 +0000 UTC" firstStartedPulling="2026-01-26 16:24:01.435957796 +0000 UTC m=+1116.597230075" lastFinishedPulling="2026-01-26 16:24:01.928601757 +0000 UTC m=+1117.089874026" observedRunningTime="2026-01-26 16:24:02.429500043 +0000 UTC m=+1117.590772322" watchObservedRunningTime="2026-01-26 16:24:02.437040397 +0000 UTC m=+1117.598312666" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.664182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.677355 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/101eb26d-100c-478e-bb84-dcc69e480c11-etc-swift\") pod \"swift-storage-0\" (UID: \"101eb26d-100c-478e-bb84-dcc69e480c11\") " pod="openstack/swift-storage-0" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.709501 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.834118 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c2zhh" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.840414 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.968877 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.968975 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969007 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969126 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969167 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969197 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7kd\" (UniqueName: \"kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969257 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfxsf\" (UniqueName: \"kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf\") pod \"89a05291-782f-440c-9906-05e00ccde7d5\" (UID: \"89a05291-782f-440c-9906-05e00ccde7d5\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969281 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts\") pod \"89a05291-782f-440c-9906-05e00ccde7d5\" (UID: 
\"89a05291-782f-440c-9906-05e00ccde7d5\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969303 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf\") pod \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\" (UID: \"4c79b7df-0dfd-42eb-bb02-09bf51f250e2\") " Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969780 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.969999 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.970944 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89a05291-782f-440c-9906-05e00ccde7d5" (UID: "89a05291-782f-440c-9906-05e00ccde7d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.982735 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf" (OuterVolumeSpecName: "kube-api-access-cfxsf") pod "89a05291-782f-440c-9906-05e00ccde7d5" (UID: "89a05291-782f-440c-9906-05e00ccde7d5"). InnerVolumeSpecName "kube-api-access-cfxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.987233 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd" (OuterVolumeSpecName: "kube-api-access-hq7kd") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "kube-api-access-hq7kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.992816 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:02 crc kubenswrapper[4680]: I0126 16:24:02.994406 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.005108 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts" (OuterVolumeSpecName: "scripts") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.009020 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c79b7df-0dfd-42eb-bb02-09bf51f250e2" (UID: "4c79b7df-0dfd-42eb-bb02-09bf51f250e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.035582 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.035889 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071425 4680 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071468 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071483 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071496 4680 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071507 4680 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071517 4680 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071530 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7kd\" (UniqueName: \"kubernetes.io/projected/4c79b7df-0dfd-42eb-bb02-09bf51f250e2-kube-api-access-hq7kd\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071544 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfxsf\" (UniqueName: \"kubernetes.io/projected/89a05291-782f-440c-9906-05e00ccde7d5-kube-api-access-cfxsf\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.071554 4680 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a05291-782f-440c-9906-05e00ccde7d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.356527 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 16:24:03 crc kubenswrapper[4680]: W0126 16:24:03.375062 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod101eb26d_100c_478e_bb84_dcc69e480c11.slice/crio-f0ddf6eeab77e96d4a2c54a13b0e34c0e1a0165f39334162f5ee61bbdfb67757 WatchSource:0}: Error finding container f0ddf6eeab77e96d4a2c54a13b0e34c0e1a0165f39334162f5ee61bbdfb67757: Status 404 returned error can't find the container with id f0ddf6eeab77e96d4a2c54a13b0e34c0e1a0165f39334162f5ee61bbdfb67757 Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.424570 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"f0ddf6eeab77e96d4a2c54a13b0e34c0e1a0165f39334162f5ee61bbdfb67757"} Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.426963 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bqckc" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.426951 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqckc" event={"ID":"89a05291-782f-440c-9906-05e00ccde7d5","Type":"ContainerDied","Data":"527b0babfeebd8ade245053bdb9df13d07340c34f64998cf920db0036519855d"} Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.427127 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527b0babfeebd8ade245053bdb9df13d07340c34f64998cf920db0036519855d" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.432929 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2zhh" event={"ID":"4c79b7df-0dfd-42eb-bb02-09bf51f250e2","Type":"ContainerDied","Data":"1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0"} Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.432981 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fea89628ae2c38b4634310a8fb4cc99b39f18bcbb8d57e2b88b048348072dc0" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.432942 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c2zhh" Jan 26 16:24:03 crc kubenswrapper[4680]: I0126 16:24:03.433114 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.088259 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xtsg9"] Jan 26 16:24:04 crc kubenswrapper[4680]: E0126 16:24:04.088569 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05291-782f-440c-9906-05e00ccde7d5" containerName="mariadb-account-create-update" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.088581 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05291-782f-440c-9906-05e00ccde7d5" containerName="mariadb-account-create-update" Jan 26 16:24:04 crc kubenswrapper[4680]: E0126 16:24:04.088611 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c79b7df-0dfd-42eb-bb02-09bf51f250e2" containerName="swift-ring-rebalance" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.088618 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c79b7df-0dfd-42eb-bb02-09bf51f250e2" containerName="swift-ring-rebalance" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.088756 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c79b7df-0dfd-42eb-bb02-09bf51f250e2" containerName="swift-ring-rebalance" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.088767 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a05291-782f-440c-9906-05e00ccde7d5" containerName="mariadb-account-create-update" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.089407 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.095043 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65zjq" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.095392 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.110167 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xtsg9"] Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.232744 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.233240 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkxd\" (UniqueName: \"kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.233274 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 
16:24:04.233308 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.334633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.334701 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkxd\" (UniqueName: \"kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.334737 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.334773 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.345561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.346520 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.352148 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.356324 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkxd\" (UniqueName: \"kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd\") pod \"glance-db-sync-xtsg9\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.406511 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.444714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"b0983c544f6dcdc72a2b27542ce41ec3e005f3bc8a8160a7c4a03b9744a1d4c3"} Jan 26 16:24:04 crc kubenswrapper[4680]: I0126 16:24:04.892179 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xtsg9"] Jan 26 16:24:05 crc kubenswrapper[4680]: I0126 16:24:05.437535 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 16:24:05 crc kubenswrapper[4680]: I0126 16:24:05.465514 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"565388a10da4c0c60744c43fe31c5ef784208a78b0b46ae354b161a97310155e"} Jan 26 16:24:05 crc kubenswrapper[4680]: I0126 16:24:05.465569 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"3fc75d8ca7868522ea9d5a20d42f35dfae3696e367563769c0bcab301287ec88"} Jan 26 16:24:05 crc kubenswrapper[4680]: I0126 16:24:05.465585 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"ad6e738b635a739bdb8886fe1c115a59cc06edfdacec44ba24dc9984311abbe6"} Jan 26 16:24:05 crc kubenswrapper[4680]: I0126 16:24:05.467268 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtsg9" event={"ID":"a97d5f1e-6cd5-4ec0-a10d-203a5c896353","Type":"ContainerStarted","Data":"2079492d414d31b965ba630f9b3e75df3d5600fb630be3d3943a49d6662f0926"} Jan 26 16:24:06 crc kubenswrapper[4680]: I0126 16:24:06.481220 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"2b8ea0757544a6322c39ee0014602bd39a519116815fa6559259b1a40a4b264c"} Jan 26 16:24:06 crc kubenswrapper[4680]: I0126 16:24:06.481618 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"0181fe50784fa3864750ff0ee00110f8d1c08cdcee16255c3a9cd7158233251c"} Jan 26 16:24:06 crc kubenswrapper[4680]: I0126 16:24:06.481636 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"e72dc502f1c342f38d7561ec09450b4e6f753fa3016d24cdec7b3f0f4f66d0b6"} Jan 26 16:24:07 crc kubenswrapper[4680]: I0126 16:24:07.500673 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"bca264880e09832a6adaabacc1c89ee45421dcbb95d64c64dd20976c83af617b"} Jan 26 16:24:07 crc kubenswrapper[4680]: I0126 16:24:07.618356 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 16:24:07 crc kubenswrapper[4680]: I0126 16:24:07.698363 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 16:24:08 crc kubenswrapper[4680]: I0126 16:24:08.522132 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"43693975c8e2acc7931c5c7860fc7165de39eb3fe96bddb13cff44cb6014b92f"} Jan 26 16:24:08 crc kubenswrapper[4680]: I0126 16:24:08.522488 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"b947a6b67828f0d05289495b14e107e1b419593884c13c2303c7f659a4e2844f"} Jan 26 16:24:08 crc kubenswrapper[4680]: I0126 16:24:08.522503 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"4b473d728ed1e6a14e292a7f20836f0453b78408d90e6aa8f95a24bb65da0001"} Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.537151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"8b6d72364e7f9ced58fa3ec6c735b105456b7c483450a436feeb6b7486984bd5"} Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.537467 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"0ea481db754bce27895829b4d0e601f83ab83388aa2552b7c47ddd8147c56b20"} Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.537480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"4c7961911e0e2f3bb2cc58712a6c2ba260347f5e2c4610aa2763d100180ae34e"} Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.537489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"101eb26d-100c-478e-bb84-dcc69e480c11","Type":"ContainerStarted","Data":"87b1228d1b167ecce2767c0d74a611975f43087113afc70c7dd6b5133e17dc60"} Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.574003 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.194248126 podStartE2EDuration="24.573987371s" podCreationTimestamp="2026-01-26 16:23:45 +0000 UTC" firstStartedPulling="2026-01-26 16:24:03.378900467 +0000 UTC m=+1118.540172736" lastFinishedPulling="2026-01-26 16:24:07.758639712 +0000 UTC m=+1122.919911981" observedRunningTime="2026-01-26 16:24:09.567196778 +0000 UTC m=+1124.728469057" watchObservedRunningTime="2026-01-26 16:24:09.573987371 +0000 UTC m=+1124.735259640" Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.897971 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.899171 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.902019 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 16:24:09 crc kubenswrapper[4680]: I0126 16:24:09.926086 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033056 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033125 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033154 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033238 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfq4w\" (UniqueName: \"kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.033295 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134614 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134697 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: 
\"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134733 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134759 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134789 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.134824 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfq4w\" (UniqueName: \"kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.135777 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.135829 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.135978 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.135980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.136339 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc 
kubenswrapper[4680]: I0126 16:24:10.154334 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfq4w\" (UniqueName: \"kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w\") pod \"dnsmasq-dns-7b94dfffbc-p69gb\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.213392 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:10 crc kubenswrapper[4680]: I0126 16:24:10.678755 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:24:10 crc kubenswrapper[4680]: W0126 16:24:10.707084 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1edca43_0123_4c22_83ae_6de4ef44db36.slice/crio-ef63af88a5bfccf4c3c2946f7f2a52785be195a28fd52fdf323faa093db6e8ff WatchSource:0}: Error finding container ef63af88a5bfccf4c3c2946f7f2a52785be195a28fd52fdf323faa093db6e8ff: Status 404 returned error can't find the container with id ef63af88a5bfccf4c3c2946f7f2a52785be195a28fd52fdf323faa093db6e8ff Jan 26 16:24:11 crc kubenswrapper[4680]: E0126 16:24:11.353728 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad417dd7_c38c_4934_a895_d0253bb03494.slice/crio-84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad417dd7_c38c_4934_a895_d0253bb03494.slice/crio-conmon-84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.576483 4680 generic.go:334] "Generic (PLEG): container finished" podID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerID="7054a2cc380f039ebb9edb2c5103ef606ae60293f81d2038e21f08e9df3efbc5" exitCode=0 Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.576596 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerDied","Data":"7054a2cc380f039ebb9edb2c5103ef606ae60293f81d2038e21f08e9df3efbc5"} Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.584856 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad417dd7-c38c-4934-a895-d0253bb03494" containerID="84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961" exitCode=0 Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.584942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerDied","Data":"84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961"} Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.588399 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerID="859528e529266e8b67616816014409631538109d4673e3d6647a949fa33a7c3a" exitCode=0 Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.588736 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" 
event={"ID":"d1edca43-0123-4c22-83ae-6de4ef44db36","Type":"ContainerDied","Data":"859528e529266e8b67616816014409631538109d4673e3d6647a949fa33a7c3a"} Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.588899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" event={"ID":"d1edca43-0123-4c22-83ae-6de4ef44db36","Type":"ContainerStarted","Data":"ef63af88a5bfccf4c3c2946f7f2a52785be195a28fd52fdf323faa093db6e8ff"} Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.708691 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bqckc"] Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.715810 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bqckc"] Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.778416 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-frv9r"] Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.779348 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.782733 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.789949 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-frv9r"] Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.874252 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.874333 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg57d\" (UniqueName: \"kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.975838 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.976002 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg57d\" (UniqueName: \"kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.976956 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " 
pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:11 crc kubenswrapper[4680]: I0126 16:24:11.994676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg57d\" (UniqueName: \"kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d\") pod \"root-account-create-update-frv9r\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:12 crc kubenswrapper[4680]: I0126 16:24:12.095522 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:13 crc kubenswrapper[4680]: I0126 16:24:13.198541 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a05291-782f-440c-9906-05e00ccde7d5" path="/var/lib/kubelet/pods/89a05291-782f-440c-9906-05e00ccde7d5/volumes" Jan 26 16:24:13 crc kubenswrapper[4680]: I0126 16:24:13.866291 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:24:13 crc kubenswrapper[4680]: I0126 16:24:13.867033 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f49hh" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.085480 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c86j2-config-jnpk2"] Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.087500 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.091330 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.097389 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c86j2-config-jnpk2"] Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219627 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219659 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219686 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " 
pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219733 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlvl\" (UniqueName: \"kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.219809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.321697 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.321736 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.321757 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.321795 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlvl\" (UniqueName: \"kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.321828 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.322010 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.322566 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run\") pod 
\"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.322618 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.323282 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.324032 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.325308 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.357445 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlvl\" (UniqueName: \"kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl\") pod \"ovn-controller-c86j2-config-jnpk2\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:14 crc kubenswrapper[4680]: I0126 16:24:14.446741 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:18 crc kubenswrapper[4680]: I0126 16:24:18.842639 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c86j2" podUID="5db7b388-c09e-441f-88db-13916a2b9208" containerName="ovn-controller" probeResult="failure" output=< Jan 26 16:24:18 crc kubenswrapper[4680]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 16:24:18 crc kubenswrapper[4680]: > Jan 26 16:24:21 crc kubenswrapper[4680]: I0126 16:24:21.044594 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 16:24:21 crc kubenswrapper[4680]: I0126 16:24:21.993326 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c86j2-config-jnpk2"] Jan 26 16:24:22 crc kubenswrapper[4680]: E0126 16:24:22.063238 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-glance-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:24:22 crc kubenswrapper[4680]: E0126 16:24:22.063546 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-glance-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:24:22 crc kubenswrapper[4680]: E0126 16:24:22.063644 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-glance-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Container
ResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xtsg9_openstack(a97d5f1e-6cd5-4ec0-a10d-203a5c896353): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:24:22 crc kubenswrapper[4680]: E0126 16:24:22.064908 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xtsg9" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.113663 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-frv9r"] Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.682450 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerStarted","Data":"a224c0526619df9fc42d61d452f7c54f4d1fb2f05991d9519790e835d4c18784"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.682940 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.683888 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-frv9r" event={"ID":"753e1d64-a470-4d8b-b715-8cc305a976af","Type":"ContainerStarted","Data":"3a80b32bc81b3baef4649ff626c2687ffc152f0a4ffb1f439e902f5a68d369af"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.683933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-frv9r" event={"ID":"753e1d64-a470-4d8b-b715-8cc305a976af","Type":"ContainerStarted","Data":"85ac443f5cb57f448e3769eed1c3776de62b161653e5b326f1ce1a63984e33be"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.685493 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerStarted","Data":"ef56e993d17a4c431f76847a8a65b409e91b9c019e0979c88bc3be1045841c34"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.685697 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.687328 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" event={"ID":"d1edca43-0123-4c22-83ae-6de4ef44db36","Type":"ContainerStarted","Data":"5994ea770791abf0561fe1cd5ef5595113ce47fcc3d466d9071cda2e439bfa3d"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.687455 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.688684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2-config-jnpk2" event={"ID":"5b36d351-0319-4c91-85cd-aa3dd97415aa","Type":"ContainerStarted","Data":"ee90a647e7142abaf7f0d28086f23814b1656aceec5771a48734472625d253e3"} Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.688722 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2-config-jnpk2" event={"ID":"5b36d351-0319-4c91-85cd-aa3dd97415aa","Type":"ContainerStarted","Data":"a9842ca246f39896543b441b531149be324647a3df1d0cab73e84310abf9a3ef"} Jan 26 16:24:22 crc kubenswrapper[4680]: E0126 16:24:22.690827 4680 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-glance-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/glance-db-sync-xtsg9" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.714405 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.239433206 podStartE2EDuration="1m14.71438301s" podCreationTimestamp="2026-01-26 16:23:08 +0000 UTC" firstStartedPulling="2026-01-26 16:23:24.111841152 +0000 UTC m=+1079.273113421" lastFinishedPulling="2026-01-26 16:23:37.586790956 +0000 UTC m=+1092.748063225" observedRunningTime="2026-01-26 16:24:22.703618495 +0000 UTC m=+1137.864890764" watchObservedRunningTime="2026-01-26 16:24:22.71438301 +0000 UTC m=+1137.875655279" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.733647 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=58.642087923 podStartE2EDuration="1m13.733631166s" podCreationTimestamp="2026-01-26 16:23:09 +0000 UTC" firstStartedPulling="2026-01-26 16:23:22.701329342 +0000 UTC m=+1077.862601611" lastFinishedPulling="2026-01-26 16:23:37.792872585 +0000 UTC m=+1092.954144854" observedRunningTime="2026-01-26 16:24:22.724026994 +0000 UTC m=+1137.885299263" watchObservedRunningTime="2026-01-26 16:24:22.733631166 +0000 UTC m=+1137.894903435" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.748052 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podStartSLOduration=13.748027645 podStartE2EDuration="13.748027645s" podCreationTimestamp="2026-01-26 16:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:22.743443845 +0000 UTC m=+1137.904716124" watchObservedRunningTime="2026-01-26 16:24:22.748027645 +0000 UTC m=+1137.909299914" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.783561 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-frv9r" podStartSLOduration=11.783543393 podStartE2EDuration="11.783543393s" podCreationTimestamp="2026-01-26 16:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:22.778348605 +0000 UTC m=+1137.939620874" watchObservedRunningTime="2026-01-26 16:24:22.783543393 +0000 UTC m=+1137.944815662" Jan 26 16:24:22 crc kubenswrapper[4680]: I0126 16:24:22.807159 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c86j2-config-jnpk2" podStartSLOduration=8.807139512 podStartE2EDuration="8.807139512s" podCreationTimestamp="2026-01-26 16:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:22.798826067 +0000 UTC m=+1137.960098346" watchObservedRunningTime="2026-01-26 16:24:22.807139512 +0000 UTC m=+1137.968411781" Jan 26 16:24:23 crc kubenswrapper[4680]: I0126 16:24:23.696759 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b36d351-0319-4c91-85cd-aa3dd97415aa" 
containerID="ee90a647e7142abaf7f0d28086f23814b1656aceec5771a48734472625d253e3" exitCode=0 Jan 26 16:24:23 crc kubenswrapper[4680]: I0126 16:24:23.696843 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2-config-jnpk2" event={"ID":"5b36d351-0319-4c91-85cd-aa3dd97415aa","Type":"ContainerDied","Data":"ee90a647e7142abaf7f0d28086f23814b1656aceec5771a48734472625d253e3"} Jan 26 16:24:23 crc kubenswrapper[4680]: I0126 16:24:23.699057 4680 generic.go:334] "Generic (PLEG): container finished" podID="753e1d64-a470-4d8b-b715-8cc305a976af" containerID="3a80b32bc81b3baef4649ff626c2687ffc152f0a4ffb1f439e902f5a68d369af" exitCode=0 Jan 26 16:24:23 crc kubenswrapper[4680]: I0126 16:24:23.699132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-frv9r" event={"ID":"753e1d64-a470-4d8b-b715-8cc305a976af","Type":"ContainerDied","Data":"3a80b32bc81b3baef4649ff626c2687ffc152f0a4ffb1f439e902f5a68d369af"} Jan 26 16:24:23 crc kubenswrapper[4680]: I0126 16:24:23.826658 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c86j2" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.113411 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.205038 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.225498 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.225633 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run" (OuterVolumeSpecName: "var-run") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.225678 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.225724 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts\") pod \"753e1d64-a470-4d8b-b715-8cc305a976af\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.226281 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.226838 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "753e1d64-a470-4d8b-b715-8cc305a976af" (UID: "753e1d64-a470-4d8b-b715-8cc305a976af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.235711 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.235799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.235827 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.235863 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlvl\" (UniqueName: \"kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl\") pod \"5b36d351-0319-4c91-85cd-aa3dd97415aa\" (UID: \"5b36d351-0319-4c91-85cd-aa3dd97415aa\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.235901 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg57d\" (UniqueName: \"kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d\") pod \"753e1d64-a470-4d8b-b715-8cc305a976af\" (UID: \"753e1d64-a470-4d8b-b715-8cc305a976af\") " Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.236500 4680 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.236519 4680 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.236534 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/753e1d64-a470-4d8b-b715-8cc305a976af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.236849 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.237722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.243906 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts" (OuterVolumeSpecName: "scripts") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.281521 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl" (OuterVolumeSpecName: "kube-api-access-5xlvl") pod "5b36d351-0319-4c91-85cd-aa3dd97415aa" (UID: "5b36d351-0319-4c91-85cd-aa3dd97415aa"). InnerVolumeSpecName "kube-api-access-5xlvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.285339 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d" (OuterVolumeSpecName: "kube-api-access-pg57d") pod "753e1d64-a470-4d8b-b715-8cc305a976af" (UID: "753e1d64-a470-4d8b-b715-8cc305a976af"). InnerVolumeSpecName "kube-api-access-pg57d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.337854 4680 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.337904 4680 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b36d351-0319-4c91-85cd-aa3dd97415aa-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.337917 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlvl\" (UniqueName: \"kubernetes.io/projected/5b36d351-0319-4c91-85cd-aa3dd97415aa-kube-api-access-5xlvl\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.337931 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg57d\" (UniqueName: \"kubernetes.io/projected/753e1d64-a470-4d8b-b715-8cc305a976af-kube-api-access-pg57d\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.337944 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b36d351-0319-4c91-85cd-aa3dd97415aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.727646 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c86j2-config-jnpk2" event={"ID":"5b36d351-0319-4c91-85cd-aa3dd97415aa","Type":"ContainerDied","Data":"a9842ca246f39896543b441b531149be324647a3df1d0cab73e84310abf9a3ef"} Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.727686 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9842ca246f39896543b441b531149be324647a3df1d0cab73e84310abf9a3ef" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.727742 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c86j2-config-jnpk2" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.740573 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-frv9r" event={"ID":"753e1d64-a470-4d8b-b715-8cc305a976af","Type":"ContainerDied","Data":"85ac443f5cb57f448e3769eed1c3776de62b161653e5b326f1ce1a63984e33be"} Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.740611 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ac443f5cb57f448e3769eed1c3776de62b161653e5b326f1ce1a63984e33be" Jan 26 16:24:25 crc kubenswrapper[4680]: I0126 16:24:25.740631 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-frv9r" Jan 26 16:24:26 crc kubenswrapper[4680]: I0126 16:24:26.302853 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c86j2-config-jnpk2"] Jan 26 16:24:26 crc kubenswrapper[4680]: I0126 16:24:26.310639 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c86j2-config-jnpk2"] Jan 26 16:24:27 crc kubenswrapper[4680]: I0126 16:24:27.179424 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b36d351-0319-4c91-85cd-aa3dd97415aa" path="/var/lib/kubelet/pods/5b36d351-0319-4c91-85cd-aa3dd97415aa/volumes" Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.215378 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.279740 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"] Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.279965 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="dnsmasq-dns" containerID="cri-o://77e0e3f3ab0ff109f7218177effc95172fa2197486b92692fe7d37505e3858af" gracePeriod=10 Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.777510 4680 generic.go:334] "Generic (PLEG): container finished" podID="977c3806-6d06-4f71-9035-67d813348eb5" containerID="77e0e3f3ab0ff109f7218177effc95172fa2197486b92692fe7d37505e3858af" exitCode=0 Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.777583 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" event={"ID":"977c3806-6d06-4f71-9035-67d813348eb5","Type":"ContainerDied","Data":"77e0e3f3ab0ff109f7218177effc95172fa2197486b92692fe7d37505e3858af"} Jan 26 16:24:30 crc kubenswrapper[4680]: I0126 16:24:30.910775 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.430357 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.529442 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config\") pod \"977c3806-6d06-4f71-9035-67d813348eb5\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.529576 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc\") pod \"977c3806-6d06-4f71-9035-67d813348eb5\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.529649 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb\") pod \"977c3806-6d06-4f71-9035-67d813348eb5\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.529671 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnthc\" (UniqueName: \"kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc\") pod \"977c3806-6d06-4f71-9035-67d813348eb5\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.529696 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb\") pod \"977c3806-6d06-4f71-9035-67d813348eb5\" (UID: \"977c3806-6d06-4f71-9035-67d813348eb5\") " Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.540962 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc" (OuterVolumeSpecName: "kube-api-access-hnthc") pod "977c3806-6d06-4f71-9035-67d813348eb5" (UID: "977c3806-6d06-4f71-9035-67d813348eb5"). InnerVolumeSpecName "kube-api-access-hnthc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.567498 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "977c3806-6d06-4f71-9035-67d813348eb5" (UID: "977c3806-6d06-4f71-9035-67d813348eb5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.569338 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "977c3806-6d06-4f71-9035-67d813348eb5" (UID: "977c3806-6d06-4f71-9035-67d813348eb5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.573844 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "977c3806-6d06-4f71-9035-67d813348eb5" (UID: "977c3806-6d06-4f71-9035-67d813348eb5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.573987 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config" (OuterVolumeSpecName: "config") pod "977c3806-6d06-4f71-9035-67d813348eb5" (UID: "977c3806-6d06-4f71-9035-67d813348eb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.631526 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.631558 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnthc\" (UniqueName: \"kubernetes.io/projected/977c3806-6d06-4f71-9035-67d813348eb5-kube-api-access-hnthc\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.631573 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.631582 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.631592 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/977c3806-6d06-4f71-9035-67d813348eb5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.786066 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" event={"ID":"977c3806-6d06-4f71-9035-67d813348eb5","Type":"ContainerDied","Data":"28dc3984f0c2ef8f4455c60ca303c66a8aa73a0509274dd18149d26770d9fd40"} Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.786159 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f78ffb8f-94fw6" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.786557 4680 scope.go:117] "RemoveContainer" containerID="77e0e3f3ab0ff109f7218177effc95172fa2197486b92692fe7d37505e3858af" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.812764 4680 scope.go:117] "RemoveContainer" containerID="d0b34e3ee7b7fca85f3f2640fb94b257ee1118b771aea7a004f9839357b99221" Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.817244 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"] Jan 26 16:24:31 crc kubenswrapper[4680]: I0126 16:24:31.822727 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f78ffb8f-94fw6"] Jan 26 16:24:33 crc kubenswrapper[4680]: I0126 16:24:33.182876 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977c3806-6d06-4f71-9035-67d813348eb5" path="/var/lib/kubelet/pods/977c3806-6d06-4f71-9035-67d813348eb5/volumes" Jan 26 16:24:35 crc kubenswrapper[4680]: I0126 16:24:35.827723 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtsg9" event={"ID":"a97d5f1e-6cd5-4ec0-a10d-203a5c896353","Type":"ContainerStarted","Data":"1263023550be3c449bdc297685577c4ce0eb9a8266ff9c4ff58b2cf537edf70e"} Jan 26 16:24:35 crc kubenswrapper[4680]: I0126 16:24:35.847992 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xtsg9" podStartSLOduration=1.996213505 podStartE2EDuration="31.847975827s" podCreationTimestamp="2026-01-26 16:24:04 +0000 UTC" firstStartedPulling="2026-01-26 16:24:04.903962897 +0000 UTC m=+1120.065235166" lastFinishedPulling="2026-01-26 16:24:34.755725199 +0000 UTC m=+1149.916997488" observedRunningTime="2026-01-26 16:24:35.841966076 +0000 UTC m=+1151.003238345" watchObservedRunningTime="2026-01-26 16:24:35.847975827 +0000 UTC m=+1151.009248096" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.176327 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.491221 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649162 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-l5d8g"] Jan 26 16:24:40 crc kubenswrapper[4680]: E0126 16:24:40.649587 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="init" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649602 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="init" Jan 26 16:24:40 crc kubenswrapper[4680]: E0126 16:24:40.649619 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753e1d64-a470-4d8b-b715-8cc305a976af" containerName="mariadb-account-create-update" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649626 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="753e1d64-a470-4d8b-b715-8cc305a976af" containerName="mariadb-account-create-update" Jan 26 16:24:40 crc kubenswrapper[4680]: E0126 16:24:40.649638 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b36d351-0319-4c91-85cd-aa3dd97415aa" containerName="ovn-config" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649644 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b36d351-0319-4c91-85cd-aa3dd97415aa" containerName="ovn-config" Jan 26 16:24:40 crc kubenswrapper[4680]: E0126 16:24:40.649656 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="dnsmasq-dns" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649662 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="dnsmasq-dns" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649803 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="977c3806-6d06-4f71-9035-67d813348eb5" containerName="dnsmasq-dns" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649822 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="753e1d64-a470-4d8b-b715-8cc305a976af" containerName="mariadb-account-create-update" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.649835 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b36d351-0319-4c91-85cd-aa3dd97415aa" containerName="ovn-config" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.650372 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.671580 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l5d8g"] Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.684380 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c904-account-create-update-gk6cn"] Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.685499 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.689303 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.716935 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c904-account-create-update-gk6cn"] Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.775400 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts\") pod \"barbican-db-create-l5d8g\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.775454 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.775515 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pb2\" (UniqueName: \"kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.775539 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrgw\" (UniqueName: \"kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw\") pod \"barbican-db-create-l5d8g\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.846220 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-87v67"] Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.847264 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-87v67" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.877720 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts\") pod \"barbican-db-create-l5d8g\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.877810 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.877923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4pb2\" (UniqueName: \"kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.877951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrgw\" (UniqueName: \"kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw\") pod \"barbican-db-create-l5d8g\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.878715 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.879389 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts\") pod \"barbican-db-create-l5d8g\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.889253 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-87v67"] Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.943357 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrgw\" (UniqueName: \"kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw\") pod \"barbican-db-create-l5d8g\" (UID: 
\"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.956802 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4pb2\" (UniqueName: \"kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2\") pod \"barbican-c904-account-create-update-gk6cn\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.979995 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrms\" (UniqueName: \"kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.980055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:40 crc kubenswrapper[4680]: I0126 16:24:40.989431 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.011199 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.051280 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lmj6r"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.052608 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.084642 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrms\" (UniqueName: \"kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.084692 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.088052 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lmj6r"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.088477 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.127158 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3562-account-create-update-hnrxz"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.128355 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.132988 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.141237 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrms\" (UniqueName: \"kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms\") pod \"heat-db-create-87v67\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " pod="openstack/heat-db-create-87v67" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.169657 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-87v67" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.244831 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3562-account-create-update-hnrxz"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.244888 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jmjhq"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.246488 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.258718 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9q8n\" (UniqueName: \"kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.259004 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.267369 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.267823 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.267976 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4fzln" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.268131 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.363960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.364195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9q8n\" (UniqueName: \"kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.364318 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.364453 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w888t\" (UniqueName: \"kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.365585 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" 
Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.377713 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jmjhq"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.412281 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-fbd3-account-create-update-bfhvf"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.413759 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.418097 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9q8n\" (UniqueName: \"kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n\") pod \"cinder-db-create-lmj6r\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.420662 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-fbd3-account-create-update-bfhvf"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.427901 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468391 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt82x\" (UniqueName: \"kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468469 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468487 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmpvq\" (UniqueName: \"kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468513 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w888t\" (UniqueName: \"kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468537 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.468734 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.469442 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.478850 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.479852 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7qgpz"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.491533 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.509705 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8f25-account-create-update-mpnp7"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.510791 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.516416 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.524828 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w888t\" (UniqueName: \"kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t\") pod \"cinder-3562-account-create-update-hnrxz\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.545384 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7qgpz"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.546326 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.566262 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8f25-account-create-update-mpnp7"] Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574033 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt82x\" (UniqueName: \"kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fgq\" (UniqueName: \"kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574480 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574511 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmpvq\" (UniqueName: \"kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574541 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574578 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574623 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574873 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.574899 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.575905 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.582829 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.587125 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.618556 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt82x\" (UniqueName: \"kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x\") pod \"heat-fbd3-account-create-update-bfhvf\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.635193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmpvq\" (UniqueName: \"kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq\") pod \"keystone-db-sync-jmjhq\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.676197 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fgq\" (UniqueName: \"kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.677335 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.677456 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 
16:24:41.677593 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.679603 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.682491 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.713999 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q\") pod \"neutron-8f25-account-create-update-mpnp7\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.714245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fgq\" (UniqueName: \"kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq\") pod \"neutron-db-create-7qgpz\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.780482 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.853691 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.903405 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:41 crc kubenswrapper[4680]: I0126 16:24:41.910150 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.187459 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c904-account-create-update-gk6cn"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.349048 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lmj6r"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.367646 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-87v67"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.395342 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l5d8g"] Jan 26 16:24:42 crc kubenswrapper[4680]: W0126 16:24:42.413209 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65255cbe_9e75_495f_adc1_048491bf7460.slice/crio-03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42 WatchSource:0}: Error finding container 03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42: Status 404 returned error can't find the container with id 03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42 Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.511540 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3562-account-create-update-hnrxz"] Jan 26 16:24:42 crc kubenswrapper[4680]: W0126 16:24:42.536714 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod201dfb63_7a3b_49b5_a200_e2c9a042e9d0.slice/crio-c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd WatchSource:0}: Error finding container c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd: Status 404 returned error can't find the container with id c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.655407 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-fbd3-account-create-update-bfhvf"] Jan 26 16:24:42 crc kubenswrapper[4680]: W0126 16:24:42.666230 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7d9e2e6_45fa_4255_bd5f_017fa7aacc1a.slice/crio-17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b WatchSource:0}: Error finding container 17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b: Status 404 returned error can't find the container with id 17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.689375 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7qgpz"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.791967 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8f25-account-create-update-mpnp7"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.822034 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jmjhq"] Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.927971 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8f25-account-create-update-mpnp7" event={"ID":"892527eb-f1e6-437d-85a3-2631386f0d55","Type":"ContainerStarted","Data":"1e24c7611169211981da00257378dd8b0dc8bf36689862dc3e1931b4833250d8"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 
16:24:42.933985 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmj6r" event={"ID":"c63117d0-89f2-4245-9c6b-74052d3d0ef6","Type":"ContainerStarted","Data":"bf9ee2c3d33d8c048957094ac58e2d9dd30a650e64921165d973d99d59d8d027"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.934028 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmj6r" event={"ID":"c63117d0-89f2-4245-9c6b-74052d3d0ef6","Type":"ContainerStarted","Data":"d81538d2f3f18df1204eba2f27562a254bd1e9a8db64105a762c88ff0a720137"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.943614 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-87v67" event={"ID":"65255cbe-9e75-495f-adc1-048491bf7460","Type":"ContainerStarted","Data":"7645418050bec1ec8ca84fa251ec29f0c38b0bc7c26ff54ce6124adc6adbb64a"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.943664 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-87v67" event={"ID":"65255cbe-9e75-495f-adc1-048491bf7460","Type":"ContainerStarted","Data":"03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.947096 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l5d8g" event={"ID":"9e20470f-8b23-4f64-8dcb-91eecfedf6be","Type":"ContainerStarted","Data":"b4389a7886647e0d6d6f6793aeeb90477fd5cf284af7a262ef10cc0f995bf506"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.947124 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l5d8g" event={"ID":"9e20470f-8b23-4f64-8dcb-91eecfedf6be","Type":"ContainerStarted","Data":"fab7f6d660b50523f2c3df747882267e2a156aaa1794c980e75ec8ea8d54f359"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.955364 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fbd3-account-create-update-bfhvf" event={"ID":"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a","Type":"ContainerStarted","Data":"17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.959293 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmjhq" event={"ID":"0372bc84-8186-4815-8177-8829bed3556f","Type":"ContainerStarted","Data":"e8094b946c2a5e4952f91a115dccde5f2c17fcd65b57d97fde0e467751b2b2b5"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.963578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7qgpz" event={"ID":"30684708-573a-4266-bc46-77aea415e091","Type":"ContainerStarted","Data":"f1c2190d6613de6741210c796ba58b254fbcb08fad36c841bc2c1b6d042c9d19"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.969322 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-lmj6r" podStartSLOduration=1.969303908 podStartE2EDuration="1.969303908s" podCreationTimestamp="2026-01-26 16:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:42.948317362 +0000 UTC m=+1158.109589631" watchObservedRunningTime="2026-01-26 16:24:42.969303908 +0000 UTC m=+1158.130576177" Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.985447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3562-account-create-update-hnrxz" 
event={"ID":"201dfb63-7a3b-49b5-a200-e2c9a042e9d0","Type":"ContainerStarted","Data":"36d4563c7a8e7cdd00fab39b4f76494d04eee8a7e08bce62c5a2ebdea44bcd4c"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.985498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3562-account-create-update-hnrxz" event={"ID":"201dfb63-7a3b-49b5-a200-e2c9a042e9d0","Type":"ContainerStarted","Data":"c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.995330 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c904-account-create-update-gk6cn" event={"ID":"33c72e11-8924-4e36-b6f1-6023bea30f11","Type":"ContainerStarted","Data":"1eafed7086d3ff85d779b0bed3ac802a828411109310bf20d483f0b28bb00365"} Jan 26 16:24:42 crc kubenswrapper[4680]: I0126 16:24:42.995375 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c904-account-create-update-gk6cn" event={"ID":"33c72e11-8924-4e36-b6f1-6023bea30f11","Type":"ContainerStarted","Data":"d99921f3df6b1453921cef6b6566704c87ddc9d956025ecfaaccddc5cf31a937"} Jan 26 16:24:43 crc kubenswrapper[4680]: I0126 16:24:43.006997 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-l5d8g" podStartSLOduration=3.006977127 podStartE2EDuration="3.006977127s" podCreationTimestamp="2026-01-26 16:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:42.961943499 +0000 UTC m=+1158.123215768" watchObservedRunningTime="2026-01-26 16:24:43.006977127 +0000 UTC m=+1158.168249396" Jan 26 16:24:43 crc kubenswrapper[4680]: I0126 16:24:43.013128 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-87v67" podStartSLOduration=3.013112831 podStartE2EDuration="3.013112831s" podCreationTimestamp="2026-01-26 16:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:42.981562675 +0000 UTC m=+1158.142834944" watchObservedRunningTime="2026-01-26 16:24:43.013112831 +0000 UTC m=+1158.174385100" Jan 26 16:24:43 crc kubenswrapper[4680]: I0126 16:24:43.022165 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-3562-account-create-update-hnrxz" podStartSLOduration=2.022143107 podStartE2EDuration="2.022143107s" podCreationTimestamp="2026-01-26 16:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:42.999403772 +0000 UTC m=+1158.160676041" watchObservedRunningTime="2026-01-26 16:24:43.022143107 +0000 UTC m=+1158.183415376" Jan 26 16:24:43 crc kubenswrapper[4680]: I0126 16:24:43.050183 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c904-account-create-update-gk6cn" podStartSLOduration=3.050162842 podStartE2EDuration="3.050162842s" podCreationTimestamp="2026-01-26 16:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:43.048975049 +0000 UTC m=+1158.210247338" watchObservedRunningTime="2026-01-26 16:24:43.050162842 +0000 UTC m=+1158.211435111" Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.003382 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="33c72e11-8924-4e36-b6f1-6023bea30f11" containerID="1eafed7086d3ff85d779b0bed3ac802a828411109310bf20d483f0b28bb00365" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.003724 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c904-account-create-update-gk6cn" event={"ID":"33c72e11-8924-4e36-b6f1-6023bea30f11","Type":"ContainerDied","Data":"1eafed7086d3ff85d779b0bed3ac802a828411109310bf20d483f0b28bb00365"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.006525 4680 generic.go:334] "Generic (PLEG): container finished" podID="9e20470f-8b23-4f64-8dcb-91eecfedf6be" containerID="b4389a7886647e0d6d6f6793aeeb90477fd5cf284af7a262ef10cc0f995bf506" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.006574 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l5d8g" event={"ID":"9e20470f-8b23-4f64-8dcb-91eecfedf6be","Type":"ContainerDied","Data":"b4389a7886647e0d6d6f6793aeeb90477fd5cf284af7a262ef10cc0f995bf506"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.008194 4680 generic.go:334] "Generic (PLEG): container finished" podID="65255cbe-9e75-495f-adc1-048491bf7460" containerID="7645418050bec1ec8ca84fa251ec29f0c38b0bc7c26ff54ce6124adc6adbb64a" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.008281 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-87v67" event={"ID":"65255cbe-9e75-495f-adc1-048491bf7460","Type":"ContainerDied","Data":"7645418050bec1ec8ca84fa251ec29f0c38b0bc7c26ff54ce6124adc6adbb64a"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.009935 4680 generic.go:334] "Generic (PLEG): container finished" podID="892527eb-f1e6-437d-85a3-2631386f0d55" containerID="7eb72cb3be0afd757653526f3486e410a28b4a51e422004a5683c48ba939c43c" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.009980 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8f25-account-create-update-mpnp7" event={"ID":"892527eb-f1e6-437d-85a3-2631386f0d55","Type":"ContainerDied","Data":"7eb72cb3be0afd757653526f3486e410a28b4a51e422004a5683c48ba939c43c"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.011932 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" containerID="21977e0e0b9e6efef08d5ba2066249fa68d73b684ce4d26f8ad529d8d67e6d94" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.011982 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fbd3-account-create-update-bfhvf" event={"ID":"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a","Type":"ContainerDied","Data":"21977e0e0b9e6efef08d5ba2066249fa68d73b684ce4d26f8ad529d8d67e6d94"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.013615 4680 generic.go:334] "Generic (PLEG): container finished" podID="c63117d0-89f2-4245-9c6b-74052d3d0ef6" containerID="bf9ee2c3d33d8c048957094ac58e2d9dd30a650e64921165d973d99d59d8d027" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.013695 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmj6r" event={"ID":"c63117d0-89f2-4245-9c6b-74052d3d0ef6","Type":"ContainerDied","Data":"bf9ee2c3d33d8c048957094ac58e2d9dd30a650e64921165d973d99d59d8d027"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.014836 4680 generic.go:334] "Generic (PLEG): container finished" podID="30684708-573a-4266-bc46-77aea415e091" containerID="474d8b5d5ee498f2f734fbf00c18e5a83fb983d84edbebe5c426826bd6e1275b" 
exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.014868 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7qgpz" event={"ID":"30684708-573a-4266-bc46-77aea415e091","Type":"ContainerDied","Data":"474d8b5d5ee498f2f734fbf00c18e5a83fb983d84edbebe5c426826bd6e1275b"} Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.016007 4680 generic.go:334] "Generic (PLEG): container finished" podID="201dfb63-7a3b-49b5-a200-e2c9a042e9d0" containerID="36d4563c7a8e7cdd00fab39b4f76494d04eee8a7e08bce62c5a2ebdea44bcd4c" exitCode=0 Jan 26 16:24:44 crc kubenswrapper[4680]: I0126 16:24:44.016036 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3562-account-create-update-hnrxz" event={"ID":"201dfb63-7a3b-49b5-a200-e2c9a042e9d0","Type":"ContainerDied","Data":"36d4563c7a8e7cdd00fab39b4f76494d04eee8a7e08bce62c5a2ebdea44bcd4c"} Jan 26 16:24:45 crc kubenswrapper[4680]: E0126 16:24:45.048485 4680 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.20:50424->38.102.83.20:45165: write tcp 38.102.83.20:50424->38.102.83.20:45165: write: broken pipe Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.802479 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.811103 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-87v67" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.818047 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w888t\" (UniqueName: \"kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t\") pod \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.818210 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts\") pod \"65255cbe-9e75-495f-adc1-048491bf7460\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.818345 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrms\" (UniqueName: \"kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms\") pod \"65255cbe-9e75-495f-adc1-048491bf7460\" (UID: \"65255cbe-9e75-495f-adc1-048491bf7460\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.818427 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts\") pod \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\" (UID: \"201dfb63-7a3b-49b5-a200-e2c9a042e9d0\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.824708 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "201dfb63-7a3b-49b5-a200-e2c9a042e9d0" (UID: "201dfb63-7a3b-49b5-a200-e2c9a042e9d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.826896 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65255cbe-9e75-495f-adc1-048491bf7460" (UID: "65255cbe-9e75-495f-adc1-048491bf7460"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.829864 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t" (OuterVolumeSpecName: "kube-api-access-w888t") pod "201dfb63-7a3b-49b5-a200-e2c9a042e9d0" (UID: "201dfb63-7a3b-49b5-a200-e2c9a042e9d0"). InnerVolumeSpecName "kube-api-access-w888t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.830013 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.838852 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms" (OuterVolumeSpecName: "kube-api-access-xdrms") pod "65255cbe-9e75-495f-adc1-048491bf7460" (UID: "65255cbe-9e75-495f-adc1-048491bf7460"). InnerVolumeSpecName "kube-api-access-xdrms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.885210 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.892751 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.903824 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.917043 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924483 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pb2\" (UniqueName: \"kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2\") pod \"33c72e11-8924-4e36-b6f1-6023bea30f11\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924586 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts\") pod \"892527eb-f1e6-437d-85a3-2631386f0d55\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924668 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts\") pod \"33c72e11-8924-4e36-b6f1-6023bea30f11\" (UID: \"33c72e11-8924-4e36-b6f1-6023bea30f11\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924707 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9q8n\" (UniqueName: \"kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n\") pod \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924822 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts\") pod \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\" (UID: \"c63117d0-89f2-4245-9c6b-74052d3d0ef6\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924862 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrgw\" (UniqueName: \"kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw\") pod \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924895 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q\") pod \"892527eb-f1e6-437d-85a3-2631386f0d55\" (UID: \"892527eb-f1e6-437d-85a3-2631386f0d55\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.924919 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts\") pod \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\" (UID: \"9e20470f-8b23-4f64-8dcb-91eecfedf6be\") " Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.925314 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w888t\" (UniqueName: \"kubernetes.io/projected/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-kube-api-access-w888t\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.925342 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65255cbe-9e75-495f-adc1-048491bf7460-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:47 
crc kubenswrapper[4680]: I0126 16:24:47.925354 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrms\" (UniqueName: \"kubernetes.io/projected/65255cbe-9e75-495f-adc1-048491bf7460-kube-api-access-xdrms\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.925366 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201dfb63-7a3b-49b5-a200-e2c9a042e9d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.925813 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e20470f-8b23-4f64-8dcb-91eecfedf6be" (UID: "9e20470f-8b23-4f64-8dcb-91eecfedf6be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.926252 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c63117d0-89f2-4245-9c6b-74052d3d0ef6" (UID: "c63117d0-89f2-4245-9c6b-74052d3d0ef6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.928002 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33c72e11-8924-4e36-b6f1-6023bea30f11" (UID: "33c72e11-8924-4e36-b6f1-6023bea30f11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.928474 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "892527eb-f1e6-437d-85a3-2631386f0d55" (UID: "892527eb-f1e6-437d-85a3-2631386f0d55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.933955 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2" (OuterVolumeSpecName: "kube-api-access-l4pb2") pod "33c72e11-8924-4e36-b6f1-6023bea30f11" (UID: "33c72e11-8924-4e36-b6f1-6023bea30f11"). InnerVolumeSpecName "kube-api-access-l4pb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.934167 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n" (OuterVolumeSpecName: "kube-api-access-l9q8n") pod "c63117d0-89f2-4245-9c6b-74052d3d0ef6" (UID: "c63117d0-89f2-4245-9c6b-74052d3d0ef6"). InnerVolumeSpecName "kube-api-access-l9q8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.935774 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q" (OuterVolumeSpecName: "kube-api-access-2vx5q") pod "892527eb-f1e6-437d-85a3-2631386f0d55" (UID: "892527eb-f1e6-437d-85a3-2631386f0d55"). InnerVolumeSpecName "kube-api-access-2vx5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.939738 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw" (OuterVolumeSpecName: "kube-api-access-pkrgw") pod "9e20470f-8b23-4f64-8dcb-91eecfedf6be" (UID: "9e20470f-8b23-4f64-8dcb-91eecfedf6be"). InnerVolumeSpecName "kube-api-access-pkrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:47 crc kubenswrapper[4680]: I0126 16:24:47.942243 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.025975 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts\") pod \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.026484 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt82x\" (UniqueName: \"kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x\") pod \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\" (UID: \"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a\") " Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.026510 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" (UID: "d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.026634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fgq\" (UniqueName: \"kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq\") pod \"30684708-573a-4266-bc46-77aea415e091\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.026696 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts\") pod \"30684708-573a-4266-bc46-77aea415e091\" (UID: \"30684708-573a-4266-bc46-77aea415e091\") " Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027558 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63117d0-89f2-4245-9c6b-74052d3d0ef6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027580 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrgw\" (UniqueName: \"kubernetes.io/projected/9e20470f-8b23-4f64-8dcb-91eecfedf6be-kube-api-access-pkrgw\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027598 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/892527eb-f1e6-437d-85a3-2631386f0d55-kube-api-access-2vx5q\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027649 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e20470f-8b23-4f64-8dcb-91eecfedf6be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027663 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4pb2\" (UniqueName: \"kubernetes.io/projected/33c72e11-8924-4e36-b6f1-6023bea30f11-kube-api-access-l4pb2\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027675 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892527eb-f1e6-437d-85a3-2631386f0d55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027686 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c72e11-8924-4e36-b6f1-6023bea30f11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027690 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30684708-573a-4266-bc46-77aea415e091" (UID: "30684708-573a-4266-bc46-77aea415e091"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027721 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9q8n\" (UniqueName: \"kubernetes.io/projected/c63117d0-89f2-4245-9c6b-74052d3d0ef6-kube-api-access-l9q8n\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.027736 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.029376 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x" (OuterVolumeSpecName: "kube-api-access-tt82x") pod "d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" (UID: "d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a"). InnerVolumeSpecName "kube-api-access-tt82x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.031928 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq" (OuterVolumeSpecName: "kube-api-access-q2fgq") pod "30684708-573a-4266-bc46-77aea415e091" (UID: "30684708-573a-4266-bc46-77aea415e091"). InnerVolumeSpecName "kube-api-access-q2fgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.051801 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8f25-account-create-update-mpnp7" event={"ID":"892527eb-f1e6-437d-85a3-2631386f0d55","Type":"ContainerDied","Data":"1e24c7611169211981da00257378dd8b0dc8bf36689862dc3e1931b4833250d8"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.051829 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8f25-account-create-update-mpnp7" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.051839 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e24c7611169211981da00257378dd8b0dc8bf36689862dc3e1931b4833250d8" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.052961 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fbd3-account-create-update-bfhvf" event={"ID":"d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a","Type":"ContainerDied","Data":"17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.052980 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17032dfeb21ed00b1b457e2a5af639f26eccc168f0ef32229e18c9ab034a5a1b" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.053033 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-fbd3-account-create-update-bfhvf" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.061692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmjhq" event={"ID":"0372bc84-8186-4815-8177-8829bed3556f","Type":"ContainerStarted","Data":"57e4e5ad58029fa452680c4afefa7d4cc51860d877518a5eea4ea9b38262dda2"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.063705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-87v67" event={"ID":"65255cbe-9e75-495f-adc1-048491bf7460","Type":"ContainerDied","Data":"03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.063728 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03298b7768040ce9f7b94cb09b138190b6a12637a43358765ff64d2dbcd40c42" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.063732 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-87v67" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.067247 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmj6r" event={"ID":"c63117d0-89f2-4245-9c6b-74052d3d0ef6","Type":"ContainerDied","Data":"d81538d2f3f18df1204eba2f27562a254bd1e9a8db64105a762c88ff0a720137"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.067285 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d81538d2f3f18df1204eba2f27562a254bd1e9a8db64105a762c88ff0a720137" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.067353 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmj6r" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.071226 4680 generic.go:334] "Generic (PLEG): container finished" podID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" containerID="1263023550be3c449bdc297685577c4ce0eb9a8266ff9c4ff58b2cf537edf70e" exitCode=0 Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.071303 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtsg9" event={"ID":"a97d5f1e-6cd5-4ec0-a10d-203a5c896353","Type":"ContainerDied","Data":"1263023550be3c449bdc297685577c4ce0eb9a8266ff9c4ff58b2cf537edf70e"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.072673 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7qgpz" event={"ID":"30684708-573a-4266-bc46-77aea415e091","Type":"ContainerDied","Data":"f1c2190d6613de6741210c796ba58b254fbcb08fad36c841bc2c1b6d042c9d19"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.072696 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c2190d6613de6741210c796ba58b254fbcb08fad36c841bc2c1b6d042c9d19" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.072733 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7qgpz" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.078248 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3562-account-create-update-hnrxz" event={"ID":"201dfb63-7a3b-49b5-a200-e2c9a042e9d0","Type":"ContainerDied","Data":"c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.078312 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e45dfdf20051addae2b9616502ecbd97531bb22207aba1e65a074cfd510ecd" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.078379 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3562-account-create-update-hnrxz" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.081400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c904-account-create-update-gk6cn" event={"ID":"33c72e11-8924-4e36-b6f1-6023bea30f11","Type":"ContainerDied","Data":"d99921f3df6b1453921cef6b6566704c87ddc9d956025ecfaaccddc5cf31a937"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.081475 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99921f3df6b1453921cef6b6566704c87ddc9d956025ecfaaccddc5cf31a937" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.081440 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c904-account-create-update-gk6cn" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.084952 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l5d8g" event={"ID":"9e20470f-8b23-4f64-8dcb-91eecfedf6be","Type":"ContainerDied","Data":"fab7f6d660b50523f2c3df747882267e2a156aaa1794c980e75ec8ea8d54f359"} Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.084985 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab7f6d660b50523f2c3df747882267e2a156aaa1794c980e75ec8ea8d54f359" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.085046 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-l5d8g" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.130039 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fgq\" (UniqueName: \"kubernetes.io/projected/30684708-573a-4266-bc46-77aea415e091-kube-api-access-q2fgq\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.133108 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30684708-573a-4266-bc46-77aea415e091-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:48 crc kubenswrapper[4680]: I0126 16:24:48.134166 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt82x\" (UniqueName: \"kubernetes.io/projected/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a-kube-api-access-tt82x\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.112829 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jmjhq" podStartSLOduration=3.257193717 podStartE2EDuration="8.112812777s" podCreationTimestamp="2026-01-26 16:24:41 +0000 UTC" firstStartedPulling="2026-01-26 16:24:42.806684012 +0000 UTC m=+1157.967956281" lastFinishedPulling="2026-01-26 16:24:47.662303082 +0000 UTC m=+1162.823575341" observedRunningTime="2026-01-26 16:24:49.111931012 +0000 UTC m=+1164.273203301" watchObservedRunningTime="2026-01-26 16:24:49.112812777 +0000 UTC m=+1164.274085046" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.446897 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.573686 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data\") pod \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.573745 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data\") pod \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.573925 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle\") pod \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.573979 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qkxd\" (UniqueName: \"kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd\") pod \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\" (UID: \"a97d5f1e-6cd5-4ec0-a10d-203a5c896353\") " Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.580677 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a97d5f1e-6cd5-4ec0-a10d-203a5c896353" (UID: "a97d5f1e-6cd5-4ec0-a10d-203a5c896353"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.585482 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd" (OuterVolumeSpecName: "kube-api-access-5qkxd") pod "a97d5f1e-6cd5-4ec0-a10d-203a5c896353" (UID: "a97d5f1e-6cd5-4ec0-a10d-203a5c896353"). InnerVolumeSpecName "kube-api-access-5qkxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.603909 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a97d5f1e-6cd5-4ec0-a10d-203a5c896353" (UID: "a97d5f1e-6cd5-4ec0-a10d-203a5c896353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.632367 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data" (OuterVolumeSpecName: "config-data") pod "a97d5f1e-6cd5-4ec0-a10d-203a5c896353" (UID: "a97d5f1e-6cd5-4ec0-a10d-203a5c896353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.676443 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qkxd\" (UniqueName: \"kubernetes.io/projected/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-kube-api-access-5qkxd\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.676664 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.676734 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:49 crc kubenswrapper[4680]: I0126 16:24:49.676795 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97d5f1e-6cd5-4ec0-a10d-203a5c896353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.099286 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtsg9" event={"ID":"a97d5f1e-6cd5-4ec0-a10d-203a5c896353","Type":"ContainerDied","Data":"2079492d414d31b965ba630f9b3e75df3d5600fb630be3d3943a49d6662f0926"} Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.099579 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2079492d414d31b965ba630f9b3e75df3d5600fb630be3d3943a49d6662f0926" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.099334 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xtsg9" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449292 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449590 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63117d0-89f2-4245-9c6b-74052d3d0ef6" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449601 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63117d0-89f2-4245-9c6b-74052d3d0ef6" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449612 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c72e11-8924-4e36-b6f1-6023bea30f11" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449618 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c72e11-8924-4e36-b6f1-6023bea30f11" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449630 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201dfb63-7a3b-49b5-a200-e2c9a042e9d0" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449636 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="201dfb63-7a3b-49b5-a200-e2c9a042e9d0" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449647 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30684708-573a-4266-bc46-77aea415e091" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449652 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="30684708-573a-4266-bc46-77aea415e091" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449663 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65255cbe-9e75-495f-adc1-048491bf7460" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449669 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="65255cbe-9e75-495f-adc1-048491bf7460" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449679 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892527eb-f1e6-437d-85a3-2631386f0d55" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449685 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="892527eb-f1e6-437d-85a3-2631386f0d55" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449698 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449704 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449714 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" containerName="glance-db-sync" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449720 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" 
containerName="glance-db-sync" Jan 26 16:24:50 crc kubenswrapper[4680]: E0126 16:24:50.449739 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e20470f-8b23-4f64-8dcb-91eecfedf6be" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449744 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e20470f-8b23-4f64-8dcb-91eecfedf6be" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449909 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449929 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="65255cbe-9e75-495f-adc1-048491bf7460" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449940 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" containerName="glance-db-sync" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449949 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e20470f-8b23-4f64-8dcb-91eecfedf6be" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449959 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="201dfb63-7a3b-49b5-a200-e2c9a042e9d0" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449978 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="892527eb-f1e6-437d-85a3-2631386f0d55" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.449990 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63117d0-89f2-4245-9c6b-74052d3d0ef6" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.450000 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c72e11-8924-4e36-b6f1-6023bea30f11" containerName="mariadb-account-create-update" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.450008 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="30684708-573a-4266-bc46-77aea415e091" containerName="mariadb-database-create" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.454061 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.468750 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.594866 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.594930 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.594979 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.595050 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbshh\" (UniqueName: \"kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.595129 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.595152 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.696625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.696669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.696746 4680 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.696994 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.697050 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.697169 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbshh\" (UniqueName: \"kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.698874 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.699272 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.699920 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.699991 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.700159 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.733734 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbshh\" (UniqueName: 
\"kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh\") pod \"dnsmasq-dns-59f5467799-8xtsv\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:50 crc kubenswrapper[4680]: I0126 16:24:50.791188 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:51 crc kubenswrapper[4680]: I0126 16:24:51.352081 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:52 crc kubenswrapper[4680]: I0126 16:24:52.115533 4680 generic.go:334] "Generic (PLEG): container finished" podID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerID="a8f8d40789ea696aed29f45071f4153287806988e52cb2c21637e72d48ff10d3" exitCode=0 Jan 26 16:24:52 crc kubenswrapper[4680]: I0126 16:24:52.115970 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" event={"ID":"efbf7006-bf18-491a-a3cf-0629ff14e71e","Type":"ContainerDied","Data":"a8f8d40789ea696aed29f45071f4153287806988e52cb2c21637e72d48ff10d3"} Jan 26 16:24:52 crc kubenswrapper[4680]: I0126 16:24:52.116002 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" event={"ID":"efbf7006-bf18-491a-a3cf-0629ff14e71e","Type":"ContainerStarted","Data":"e7c9bdd98fe4699d86213fc965d0798f239ea16c893260e18c96134ffacbba6c"} Jan 26 16:24:52 crc kubenswrapper[4680]: I0126 16:24:52.125193 4680 generic.go:334] "Generic (PLEG): container finished" podID="0372bc84-8186-4815-8177-8829bed3556f" containerID="57e4e5ad58029fa452680c4afefa7d4cc51860d877518a5eea4ea9b38262dda2" exitCode=0 Jan 26 16:24:52 crc kubenswrapper[4680]: I0126 16:24:52.125241 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmjhq" event={"ID":"0372bc84-8186-4815-8177-8829bed3556f","Type":"ContainerDied","Data":"57e4e5ad58029fa452680c4afefa7d4cc51860d877518a5eea4ea9b38262dda2"} Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.137173 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" event={"ID":"efbf7006-bf18-491a-a3cf-0629ff14e71e","Type":"ContainerStarted","Data":"ee4d3639e2735156af539948741f01ab8570ed4979e9a1189b31ed9751cb1490"} Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.137523 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.162498 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" podStartSLOduration=3.162480236 podStartE2EDuration="3.162480236s" podCreationTimestamp="2026-01-26 16:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:53.159289855 +0000 UTC m=+1168.320562114" watchObservedRunningTime="2026-01-26 16:24:53.162480236 +0000 UTC m=+1168.323752505" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.515548 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.657086 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle\") pod \"0372bc84-8186-4815-8177-8829bed3556f\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.657147 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmpvq\" (UniqueName: \"kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq\") pod \"0372bc84-8186-4815-8177-8829bed3556f\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.657248 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data\") pod \"0372bc84-8186-4815-8177-8829bed3556f\" (UID: \"0372bc84-8186-4815-8177-8829bed3556f\") " Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.661682 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq" (OuterVolumeSpecName: "kube-api-access-jmpvq") pod "0372bc84-8186-4815-8177-8829bed3556f" (UID: "0372bc84-8186-4815-8177-8829bed3556f"). InnerVolumeSpecName "kube-api-access-jmpvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.680451 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0372bc84-8186-4815-8177-8829bed3556f" (UID: "0372bc84-8186-4815-8177-8829bed3556f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.704577 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data" (OuterVolumeSpecName: "config-data") pod "0372bc84-8186-4815-8177-8829bed3556f" (UID: "0372bc84-8186-4815-8177-8829bed3556f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.758800 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.759096 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmpvq\" (UniqueName: \"kubernetes.io/projected/0372bc84-8186-4815-8177-8829bed3556f-kube-api-access-jmpvq\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:53 crc kubenswrapper[4680]: I0126 16:24:53.759190 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0372bc84-8186-4815-8177-8829bed3556f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.144632 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmjhq" event={"ID":"0372bc84-8186-4815-8177-8829bed3556f","Type":"ContainerDied","Data":"e8094b946c2a5e4952f91a115dccde5f2c17fcd65b57d97fde0e467751b2b2b5"} Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.145091 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8094b946c2a5e4952f91a115dccde5f2c17fcd65b57d97fde0e467751b2b2b5" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.144672 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jmjhq" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.454283 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.489204 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:54 crc kubenswrapper[4680]: E0126 16:24:54.489681 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0372bc84-8186-4815-8177-8829bed3556f" containerName="keystone-db-sync" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.489755 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0372bc84-8186-4815-8177-8829bed3556f" containerName="keystone-db-sync" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.489970 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0372bc84-8186-4815-8177-8829bed3556f" containerName="keystone-db-sync" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.490832 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.510501 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.568981 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jzx79"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.570216 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571218 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571261 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571338 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571859 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nz8\" (UniqueName: \"kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571926 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.571952 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.573962 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.574214 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.574378 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.575157 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4fzln" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.575324 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.615203 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jzx79"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.673933 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.673986 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nz8\" (UniqueName: \"kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674024 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6k2z\" (UniqueName: \"kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674048 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674091 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674143 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674177 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674275 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674299 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674321 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674853 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674883 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.674994 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.675086 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.675254 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.722422 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nz8\" (UniqueName: \"kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8\") pod \"dnsmasq-dns-6687f9ccf9-fp8r2\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.730229 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zpnh8"] Jan 26 
16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.732596 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.737395 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.738247 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-jtbml" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.764608 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zpnh8"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.775953 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.776047 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6k2z\" (UniqueName: \"kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.776110 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.776133 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.776174 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.776209 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.787817 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.792564 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys\") pod 
\"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.793677 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.798850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.817035 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.819994 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.857302 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6k2z\" (UniqueName: \"kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z\") pod \"keystone-bootstrap-jzx79\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.877968 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.878318 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grrfv\" (UniqueName: \"kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.878412 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.890454 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.911178 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.912489 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.935565 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.935708 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.935939 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lk242" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.936101 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.944183 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981202 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q26hh\" (UniqueName: \"kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981423 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981503 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981551 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981568 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grrfv\" (UniqueName: \"kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " 
pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.981752 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:54 crc kubenswrapper[4680]: I0126 16:24:54.990796 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.005890 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.050769 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grrfv\" (UniqueName: \"kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv\") pod \"heat-db-sync-zpnh8\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.054967 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-84jft"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.056173 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.060861 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zpnh8" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.068415 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-84jft"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.074960 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.075151 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.075252 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-k4jj2" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.085121 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.085191 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q26hh\" (UniqueName: \"kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.085266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.085312 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.085338 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.086834 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.087215 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.088102 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data\") pod 
\"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.089519 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.122540 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.124415 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.137958 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.138208 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.156735 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h9tvh"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.157965 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.163139 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-78kpg" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.167611 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.170038 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q26hh\" (UniqueName: \"kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh\") pod \"horizon-f567955f7-8qzq9\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187731 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187757 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187788 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4fnm\" (UniqueName: \"kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187805 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187824 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187854 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187869 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrrm\" (UniqueName: \"kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.187988 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.188045 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.188080 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.199620 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="dnsmasq-dns" containerID="cri-o://ee4d3639e2735156af539948741f01ab8570ed4979e9a1189b31ed9751cb1490" gracePeriod=10 Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.211187 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.290580 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxtx\" (UniqueName: \"kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.290942 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.290965 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.290989 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4fnm\" (UniqueName: \"kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291010 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291029 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291052 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrrm\" (UniqueName: \"kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291113 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291218 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291244 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291348 
4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.291383 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.293166 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.295996 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.299405 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.314990 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h9tvh"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.331853 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.332773 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.340963 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.346697 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.353686 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8b6qn"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.364912 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.370924 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.372019 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4fnm\" (UniqueName: \"kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.372774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.377516 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.425362 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.430896 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrrm\" (UniqueName: \"kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm\") pod \"neutron-db-sync-84jft\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.432400 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-84jft" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.447409 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65zjq" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.459394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.459629 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wm89s" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.459753 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.459909 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.460043 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.460192 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.463397 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.467827 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.468038 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxtx\" (UniqueName: \"kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.468132 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.476316 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.492368 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.492371 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.506642 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8b6qn"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.519646 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxtx\" (UniqueName: \"kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx\") pod \"barbican-db-sync-h9tvh\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.552623 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.554502 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576775 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576835 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576854 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576881 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576903 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576971 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.576992 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577010 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s\") pod 
\"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577025 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577085 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577106 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq56\" (UniqueName: \"kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577130 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.577155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.601599 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.636100 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kjtk7"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.637148 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.649381 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.649702 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jq6r7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.654383 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.657648 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kjtk7"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.678539 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.681941 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.681992 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlq56\" (UniqueName: \"kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682027 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682055 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682095 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682120 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682154 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " 
pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682188 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682304 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682383 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682479 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682539 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77v27\" (UniqueName: \"kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682638 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682658 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 
16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682713 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682738 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.682755 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.686117 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.687588 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.689370 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.704606 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.704772 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.706901 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.708674 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 
16:24:55.710050 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.711179 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.735484 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.735550 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.750624 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.770885 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.771023 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.771768 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793631 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793709 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793756 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793779 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793794 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793820 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793844 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jwt\" (UniqueName: \"kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793864 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzw8\" (UniqueName: \"kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793905 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77v27\" (UniqueName: \"kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793921 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793937 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793957 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.793973 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.794013 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.794034 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.795558 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.795793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.796707 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.801777 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.814322 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlq56\" (UniqueName: \"kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56\") pod \"cinder-db-sync-8b6qn\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.818623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77v27\" (UniqueName: \"kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.821517 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key\") pod \"horizon-5d49dbb4df-cwsvx\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.833708 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.850971 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.862164 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.865586 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.865737 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.888820 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " pod="openstack/glance-default-external-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.897998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910412 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910499 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910753 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52jwt\" (UniqueName: \"kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.910926 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgzw8\" (UniqueName: \"kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911028 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911126 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911207 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911465 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911555 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911711 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckx7p\" (UniqueName: \"kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911802 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.911907 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.898968 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.912520 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.913517 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.913857 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.914575 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.915689 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.916058 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 
16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.923906 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.929452 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.945860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52jwt\" (UniqueName: \"kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.946866 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data\") pod \"placement-db-sync-kjtk7\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.956560 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:24:55 crc kubenswrapper[4680]: I0126 16:24:55.972055 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kjtk7" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.018650 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgzw8\" (UniqueName: \"kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8\") pod \"dnsmasq-dns-788d7bbc75-7s7n4\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019853 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019893 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019909 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019954 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.019973 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.020000 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckx7p\" (UniqueName: \"kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.020037 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.020280 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.022631 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.036544 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.075695 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckx7p\" (UniqueName: \"kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.105901 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.134305 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.164584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.165250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.168273 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.168614 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.174185 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.290942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" event={"ID":"ef89abe9-d7a7-4498-9e71-6103a1ebfb80","Type":"ContainerStarted","Data":"57382cfa8229ffc035f34507508d8d2fd710932d3ca967d4f9f8744dd5a98880"} Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.296808 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.298315 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.358814 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.460310 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zpnh8"] Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.656132 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.941835 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.957277 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-84jft"] Jan 26 16:24:56 crc kubenswrapper[4680]: I0126 16:24:56.979596 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jzx79"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.307639 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h9tvh"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.336337 4680 generic.go:334] "Generic (PLEG): container finished" podID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerID="ee4d3639e2735156af539948741f01ab8570ed4979e9a1189b31ed9751cb1490" exitCode=0 Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.336451 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" event={"ID":"efbf7006-bf18-491a-a3cf-0629ff14e71e","Type":"ContainerDied","Data":"ee4d3639e2735156af539948741f01ab8570ed4979e9a1189b31ed9751cb1490"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.337217 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8b6qn"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.343302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f567955f7-8qzq9" event={"ID":"e2fa8451-2e90-43cf-aedc-85cad61a60c4","Type":"ContainerStarted","Data":"2bb0b648307fe3259ec862a480a324aa949394b9ef6e5d1e69b1ae923b9b80da"} Jan 26 16:24:57 crc kubenswrapper[4680]: W0126 16:24:57.364897 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59df103d_c023_42a1_8e2c_f262d023d232.slice/crio-c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c WatchSource:0}: Error finding container c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c: Status 404 returned error can't find the container with id c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.368780 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kjtk7"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.387313 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jzx79" event={"ID":"a4dda9df-c554-4d8f-ad69-782e8847d0b6","Type":"ContainerStarted","Data":"ddf920ec57aa7bc24dbf69dc017a5ec3b081b0b823af97ccc002445f581e0a63"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.401366 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zpnh8" event={"ID":"a78a7e79-9fe8-46b7-a137-2be924f24935","Type":"ContainerStarted","Data":"e0b8dd2c7e5149560390857290eb6af59d7bcab0858808f678342830ccc8bdeb"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.412869 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-84jft" 
event={"ID":"bbd801f9-47d9-4d25-8809-c923b39525bf","Type":"ContainerStarted","Data":"0f732837c7ccc143d0591247043ab5a635fa78b85aa2fb48bc76dbad27e78a74"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.415199 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.415224 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerStarted","Data":"e1b8d1f40c953f3b8592242dfe887b98e58ecfcf08ed7406a2fad57e9b7388d2"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.417016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" event={"ID":"ef89abe9-d7a7-4498-9e71-6103a1ebfb80","Type":"ContainerStarted","Data":"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f"} Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.463410 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.510412 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.694446 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:24:57 crc kubenswrapper[4680]: I0126 16:24:57.936786 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.103792 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.123683 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.159774 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.161102 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.231844 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.245586 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.245632 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.245720 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.245762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvr74\" (UniqueName: \"kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.245819 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.353401 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.358406 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvr74\" (UniqueName: \"kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.359937 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.360095 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.360156 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.358309 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.363519 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.364197 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.367780 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.410906 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.411680 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvr74\" (UniqueName: \"kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74\") pod \"horizon-6cb88c9957-lvzdd\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.418702 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.484766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerStarted","Data":"395d482a73b38497ac9e9f01132d392348c4a222cbbe498337f73518f9ea34c6"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.486055 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.487644 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h9tvh" event={"ID":"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c","Type":"ContainerStarted","Data":"595a6078c809cda08ccc79bfbf1131a2a15f71e66f61f98300f567e3f0b767f4"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.490219 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.493720 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" event={"ID":"a72e4213-72e8-4a99-9863-fe63708b3f22","Type":"ContainerStarted","Data":"088bd2cc5078d4e6f8b72a1f1a016c9ad8dc7dc103648ed24e77a42ae60b6247"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.498297 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kjtk7" event={"ID":"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf","Type":"ContainerStarted","Data":"5e913656fd96c82b00e683ed15081714077c2a46fc5aa67a65938d8803009e24"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.500291 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jzx79" event={"ID":"a4dda9df-c554-4d8f-ad69-782e8847d0b6","Type":"ContainerStarted","Data":"fb6d9f8b82ea74bdd046ee4d42e97faac01d44e12013a3e1eda691efb47209b1"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.506947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-84jft" event={"ID":"bbd801f9-47d9-4d25-8809-c923b39525bf","Type":"ContainerStarted","Data":"d99b813d199e891b01e73232a4ada37d92016357cd47ca60c98e49cb5f5888a6"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.512205 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8b6qn" event={"ID":"59df103d-c023-42a1-8e2c-f262d023d232","Type":"ContainerStarted","Data":"c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.533900 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerStarted","Data":"96c0cc2aaabe4c1725f3db62ab23af7d0669f1ee38af0d0c817add91bedbcb9d"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.536311 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef89abe9-d7a7-4498-9e71-6103a1ebfb80" containerID="839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f" exitCode=0 Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.536403 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" event={"ID":"ef89abe9-d7a7-4498-9e71-6103a1ebfb80","Type":"ContainerDied","Data":"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.536435 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" event={"ID":"ef89abe9-d7a7-4498-9e71-6103a1ebfb80","Type":"ContainerDied","Data":"57382cfa8229ffc035f34507508d8d2fd710932d3ca967d4f9f8744dd5a98880"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.536455 4680 scope.go:117] "RemoveContainer" containerID="839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 
16:24:58.536486 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6687f9ccf9-fp8r2" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.545359 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jzx79" podStartSLOduration=4.545342599 podStartE2EDuration="4.545342599s" podCreationTimestamp="2026-01-26 16:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:58.543941569 +0000 UTC m=+1173.705213868" watchObservedRunningTime="2026-01-26 16:24:58.545342599 +0000 UTC m=+1173.706614868" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.555638 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerStarted","Data":"36b7ec621ffffca9221fc1bd4bf7b8887e2cab529016f85ff219a813ebadd967"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.561339 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" event={"ID":"efbf7006-bf18-491a-a3cf-0629ff14e71e","Type":"ContainerDied","Data":"e7c9bdd98fe4699d86213fc965d0798f239ea16c893260e18c96134ffacbba6c"} Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.561445 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5467799-8xtsv" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.562559 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563175 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563313 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563362 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563423 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563463 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" 
(UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563501 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbshh\" (UniqueName: \"kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563523 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7nz8\" (UniqueName: \"kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563551 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563601 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563625 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb\") pod \"efbf7006-bf18-491a-a3cf-0629ff14e71e\" (UID: \"efbf7006-bf18-491a-a3cf-0629ff14e71e\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.563674 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb\") pod \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\" (UID: \"ef89abe9-d7a7-4498-9e71-6103a1ebfb80\") " Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.569551 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-84jft" podStartSLOduration=4.569533605 podStartE2EDuration="4.569533605s" podCreationTimestamp="2026-01-26 16:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:24:58.558017168 +0000 UTC m=+1173.719289427" watchObservedRunningTime="2026-01-26 16:24:58.569533605 +0000 UTC m=+1173.730805874" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.630283 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh" (OuterVolumeSpecName: "kube-api-access-lbshh") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: "efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "kube-api-access-lbshh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.636842 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.654825 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8" (OuterVolumeSpecName: "kube-api-access-f7nz8") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "kube-api-access-f7nz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.661285 4680 scope.go:117] "RemoveContainer" containerID="839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f" Jan 26 16:24:58 crc kubenswrapper[4680]: E0126 16:24:58.665635 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f\": container with ID starting with 839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f not found: ID does not exist" containerID="839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.665673 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f"} err="failed to get container status \"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f\": rpc error: code = NotFound desc = could not find container \"839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f\": container with ID starting with 839038a574b8bc50a0971b42b87e097384c551d78a0a5a989b22b274ec437b6f not found: ID does not exist" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.665694 4680 scope.go:117] "RemoveContainer" containerID="ee4d3639e2735156af539948741f01ab8570ed4979e9a1189b31ed9751cb1490" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.666948 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbshh\" (UniqueName: \"kubernetes.io/projected/efbf7006-bf18-491a-a3cf-0629ff14e71e-kube-api-access-lbshh\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.666967 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7nz8\" (UniqueName: \"kubernetes.io/projected/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-kube-api-access-f7nz8\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.666976 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.718021 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: 
"efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.726600 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.729580 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.730705 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config" (OuterVolumeSpecName: "config") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.746043 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config" (OuterVolumeSpecName: "config") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: "efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.760051 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef89abe9-d7a7-4498-9e71-6103a1ebfb80" (UID: "ef89abe9-d7a7-4498-9e71-6103a1ebfb80"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769393 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769424 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769435 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769445 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769456 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.769466 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef89abe9-d7a7-4498-9e71-6103a1ebfb80-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.785769 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: "efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.799624 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: "efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.818427 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "efbf7006-bf18-491a-a3cf-0629ff14e71e" (UID: "efbf7006-bf18-491a-a3cf-0629ff14e71e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.872045 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.872096 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.872108 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbf7006-bf18-491a-a3cf-0629ff14e71e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.967509 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.985418 4680 scope.go:117] "RemoveContainer" containerID="a8f8d40789ea696aed29f45071f4153287806988e52cb2c21637e72d48ff10d3" Jan 26 16:24:58 crc kubenswrapper[4680]: I0126 16:24:58.988936 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59f5467799-8xtsv"] Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.035099 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.055743 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6687f9ccf9-fp8r2"] Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.227919 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef89abe9-d7a7-4498-9e71-6103a1ebfb80" path="/var/lib/kubelet/pods/ef89abe9-d7a7-4498-9e71-6103a1ebfb80/volumes" Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.228905 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" path="/var/lib/kubelet/pods/efbf7006-bf18-491a-a3cf-0629ff14e71e/volumes" Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.414988 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:24:59 crc kubenswrapper[4680]: W0126 16:24:59.462266 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d52a893_b89a_4ee1_b056_78a94a87ac96.slice/crio-0a904f8b7c01fb2ba822a0207f8a907badc87d32cfd7fd006a4a21a8807c1121 WatchSource:0}: Error finding container 0a904f8b7c01fb2ba822a0207f8a907badc87d32cfd7fd006a4a21a8807c1121: Status 404 returned error can't find the container with id 0a904f8b7c01fb2ba822a0207f8a907badc87d32cfd7fd006a4a21a8807c1121 Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.589635 4680 generic.go:334] "Generic (PLEG): container finished" podID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerID="b3efb458e55e5d75976dbce71ffb4c00dfc7dfd9f07ef7521a04e212daf7b569" exitCode=0 Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.590032 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" event={"ID":"a72e4213-72e8-4a99-9863-fe63708b3f22","Type":"ContainerDied","Data":"b3efb458e55e5d75976dbce71ffb4c00dfc7dfd9f07ef7521a04e212daf7b569"} Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.594931 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerStarted","Data":"0a904f8b7c01fb2ba822a0207f8a907badc87d32cfd7fd006a4a21a8807c1121"} Jan 26 16:24:59 crc kubenswrapper[4680]: I0126 16:24:59.599786 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerStarted","Data":"25c0d8b68b2d8052bc3396a1f353ea0ff785657f29b23754a071607a9a863522"} Jan 26 16:25:00 crc kubenswrapper[4680]: I0126 16:25:00.627298 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerStarted","Data":"47fea1763786bae0e26253900a691e8b26d1bc6cb1b3d29746fcaf55b5788425"} Jan 26 16:25:00 crc kubenswrapper[4680]: I0126 16:25:00.631830 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" event={"ID":"a72e4213-72e8-4a99-9863-fe63708b3f22","Type":"ContainerStarted","Data":"f7196f4d9625bafd28d8b9f31b82081ecd8cf88e0559f0202f97950806390f14"} Jan 26 16:25:00 crc kubenswrapper[4680]: I0126 16:25:00.632911 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:25:00 crc kubenswrapper[4680]: I0126 16:25:00.671250 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" podStartSLOduration=5.671221371 podStartE2EDuration="5.671221371s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:00.648314101 +0000 UTC m=+1175.809586370" watchObservedRunningTime="2026-01-26 16:25:00.671221371 +0000 UTC m=+1175.832493640" Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.658208 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerStarted","Data":"ea592240267940e59912471c8ee1c8da06b0bed35ce8c718947de77b63db8f9e"} Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.658887 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-log" containerID="cri-o://47fea1763786bae0e26253900a691e8b26d1bc6cb1b3d29746fcaf55b5788425" gracePeriod=30 Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.659223 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-httpd" containerID="cri-o://ea592240267940e59912471c8ee1c8da06b0bed35ce8c718947de77b63db8f9e" gracePeriod=30 Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.663033 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-log" containerID="cri-o://25c0d8b68b2d8052bc3396a1f353ea0ff785657f29b23754a071607a9a863522" gracePeriod=30 Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.663294 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerStarted","Data":"97586466d23bfbff058fe67eb01dc1b5d23b659aff313d33e50886422f75a91d"} Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.663373 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-httpd" containerID="cri-o://97586466d23bfbff058fe67eb01dc1b5d23b659aff313d33e50886422f75a91d" gracePeriod=30 Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.694506 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.69448484 podStartE2EDuration="7.69448484s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:02.684400084 +0000 UTC m=+1177.845672353" watchObservedRunningTime="2026-01-26 16:25:02.69448484 +0000 UTC m=+1177.855757109" Jan 26 16:25:02 crc kubenswrapper[4680]: I0126 16:25:02.727471 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.727450235 podStartE2EDuration="7.727450235s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:02.716364191 +0000 UTC m=+1177.877636470" watchObservedRunningTime="2026-01-26 16:25:02.727450235 +0000 UTC m=+1177.888722504" Jan 26 16:25:03 crc kubenswrapper[4680]: I0126 16:25:03.672239 4680 generic.go:334] "Generic (PLEG): container finished" podID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerID="47fea1763786bae0e26253900a691e8b26d1bc6cb1b3d29746fcaf55b5788425" exitCode=143 Jan 26 16:25:03 crc kubenswrapper[4680]: I0126 16:25:03.672299 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerDied","Data":"47fea1763786bae0e26253900a691e8b26d1bc6cb1b3d29746fcaf55b5788425"} Jan 26 16:25:03 crc kubenswrapper[4680]: I0126 16:25:03.675562 4680 generic.go:334] "Generic (PLEG): container finished" podID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerID="25c0d8b68b2d8052bc3396a1f353ea0ff785657f29b23754a071607a9a863522" exitCode=143 Jan 26 16:25:03 crc kubenswrapper[4680]: I0126 16:25:03.675597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerDied","Data":"25c0d8b68b2d8052bc3396a1f353ea0ff785657f29b23754a071607a9a863522"} Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.677541 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701128 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c44b75754-m2rxl"] Jan 26 16:25:04 crc kubenswrapper[4680]: E0126 16:25:04.701484 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="init" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701500 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="init" Jan 26 16:25:04 crc kubenswrapper[4680]: E0126 16:25:04.701528 4680 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef89abe9-d7a7-4498-9e71-6103a1ebfb80" containerName="init" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701534 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef89abe9-d7a7-4498-9e71-6103a1ebfb80" containerName="init" Jan 26 16:25:04 crc kubenswrapper[4680]: E0126 16:25:04.701545 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="dnsmasq-dns" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701552 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="dnsmasq-dns" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701714 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbf7006-bf18-491a-a3cf-0629ff14e71e" containerName="dnsmasq-dns" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.701757 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef89abe9-d7a7-4498-9e71-6103a1ebfb80" containerName="init" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.702593 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.708513 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.730946 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c44b75754-m2rxl"] Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793584 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793647 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793709 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793827 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793874 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 
16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793909 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.793934 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txr7t\" (UniqueName: \"kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.878877 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895288 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895391 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895419 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895448 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895471 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txr7t\" (UniqueName: \"kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.895546 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts\") pod \"horizon-c44b75754-m2rxl\" (UID: 
\"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.896286 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.896990 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.898388 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.902151 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8657f7848d-ls2sv"] Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.904129 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.905523 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.905855 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.924037 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.990151 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8657f7848d-ls2sv"] Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.991996 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txr7t\" (UniqueName: \"kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t\") pod \"horizon-c44b75754-m2rxl\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997661 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-combined-ca-bundle\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " 
pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997815 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhft7\" (UniqueName: \"kubernetes.io/projected/34651440-00a2-4b50-a6cc-a0230d4def92-kube-api-access-fhft7\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997847 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34651440-00a2-4b50-a6cc-a0230d4def92-logs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997869 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-config-data\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997915 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-tls-certs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997942 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-secret-key\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:04 crc kubenswrapper[4680]: I0126 16:25:04.997997 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-scripts\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-scripts\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099759 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-combined-ca-bundle\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099824 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhft7\" (UniqueName: \"kubernetes.io/projected/34651440-00a2-4b50-a6cc-a0230d4def92-kube-api-access-fhft7\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " 
pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099859 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34651440-00a2-4b50-a6cc-a0230d4def92-logs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099894 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-config-data\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099917 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-tls-certs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.099953 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-secret-key\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.100843 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-scripts\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.101298 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34651440-00a2-4b50-a6cc-a0230d4def92-config-data\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.101536 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34651440-00a2-4b50-a6cc-a0230d4def92-logs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.112796 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.112803 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-tls-certs\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.117402 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-combined-ca-bundle\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.117518 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/34651440-00a2-4b50-a6cc-a0230d4def92-horizon-secret-key\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.120312 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhft7\" (UniqueName: \"kubernetes.io/projected/34651440-00a2-4b50-a6cc-a0230d4def92-kube-api-access-fhft7\") pod \"horizon-8657f7848d-ls2sv\" (UID: \"34651440-00a2-4b50-a6cc-a0230d4def92\") " pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.342738 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.696982 4680 generic.go:334] "Generic (PLEG): container finished" podID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerID="ea592240267940e59912471c8ee1c8da06b0bed35ce8c718947de77b63db8f9e" exitCode=0 Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.697049 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerDied","Data":"ea592240267940e59912471c8ee1c8da06b0bed35ce8c718947de77b63db8f9e"} Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.699406 4680 generic.go:334] "Generic (PLEG): container finished" podID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerID="97586466d23bfbff058fe67eb01dc1b5d23b659aff313d33e50886422f75a91d" exitCode=0 Jan 26 16:25:05 crc kubenswrapper[4680]: I0126 16:25:05.699433 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerDied","Data":"97586466d23bfbff058fe67eb01dc1b5d23b659aff313d33e50886422f75a91d"} Jan 26 16:25:06 crc kubenswrapper[4680]: I0126 16:25:06.298880 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:25:06 crc kubenswrapper[4680]: I0126 16:25:06.372620 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:25:06 crc kubenswrapper[4680]: I0126 16:25:06.376031 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" 
containerID="cri-o://5994ea770791abf0561fe1cd5ef5595113ce47fcc3d466d9071cda2e439bfa3d" gracePeriod=10 Jan 26 16:25:07 crc kubenswrapper[4680]: I0126 16:25:07.720911 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerID="5994ea770791abf0561fe1cd5ef5595113ce47fcc3d466d9071cda2e439bfa3d" exitCode=0 Jan 26 16:25:07 crc kubenswrapper[4680]: I0126 16:25:07.720981 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" event={"ID":"d1edca43-0123-4c22-83ae-6de4ef44db36","Type":"ContainerDied","Data":"5994ea770791abf0561fe1cd5ef5595113ce47fcc3d466d9071cda2e439bfa3d"} Jan 26 16:25:09 crc kubenswrapper[4680]: I0126 16:25:09.740266 4680 generic.go:334] "Generic (PLEG): container finished" podID="a4dda9df-c554-4d8f-ad69-782e8847d0b6" containerID="fb6d9f8b82ea74bdd046ee4d42e97faac01d44e12013a3e1eda691efb47209b1" exitCode=0 Jan 26 16:25:09 crc kubenswrapper[4680]: I0126 16:25:09.740607 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jzx79" event={"ID":"a4dda9df-c554-4d8f-ad69-782e8847d0b6","Type":"ContainerDied","Data":"fb6d9f8b82ea74bdd046ee4d42e97faac01d44e12013a3e1eda691efb47209b1"} Jan 26 16:25:10 crc kubenswrapper[4680]: I0126 16:25:10.214963 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: connect: connection refused" Jan 26 16:25:15 crc kubenswrapper[4680]: I0126 16:25:15.215132 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: connect: connection refused" Jan 26 16:25:16 crc kubenswrapper[4680]: I0126 16:25:16.981032 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:25:16 crc kubenswrapper[4680]: I0126 16:25:16.981154 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.454146 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.454493 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.454654 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52jwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-kjtk7_openstack(ab9fd2fb-6b04-4b4b-813b-b7378b617bbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.455840 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-kjtk7" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.463416 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.463470 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.463604 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n588h5dfh5dh64h586hcch66dh78h5f9h5fbh56fh5h55fh595h5cdhc8h5bfh59fh8hfchf9h694hc4hc5h5b6h7fh56fh57bh5d8h688hf9h647q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q26hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f567955f7-8qzq9_openstack(e2fa8451-2e90-43cf-aedc-85cad61a60c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.467686 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb\\\"\"]" pod="openstack/horizon-f567955f7-8qzq9" podUID="e2fa8451-2e90-43cf-aedc-85cad61a60c4" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.556296 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725214 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725293 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725314 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725376 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6k2z\" (UniqueName: \"kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725478 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.725497 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys\") pod \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\" (UID: \"a4dda9df-c554-4d8f-ad69-782e8847d0b6\") " Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.732054 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.732334 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z" (OuterVolumeSpecName: "kube-api-access-n6k2z") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "kube-api-access-n6k2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.733025 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts" (OuterVolumeSpecName: "scripts") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.741414 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.752872 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.753440 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data" (OuterVolumeSpecName: "config-data") pod "a4dda9df-c554-4d8f-ad69-782e8847d0b6" (UID: "a4dda9df-c554-4d8f-ad69-782e8847d0b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827482 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827518 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827528 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827538 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6k2z\" (UniqueName: \"kubernetes.io/projected/a4dda9df-c554-4d8f-ad69-782e8847d0b6-kube-api-access-n6k2z\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827546 4680 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.827553 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dda9df-c554-4d8f-ad69-782e8847d0b6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.859613 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jzx79" Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.862968 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jzx79" event={"ID":"a4dda9df-c554-4d8f-ad69-782e8847d0b6","Type":"ContainerDied","Data":"ddf920ec57aa7bc24dbf69dc017a5ec3b081b0b823af97ccc002445f581e0a63"} Jan 26 16:25:20 crc kubenswrapper[4680]: I0126 16:25:20.863031 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf920ec57aa7bc24dbf69dc017a5ec3b081b0b823af97ccc002445f581e0a63" Jan 26 16:25:20 crc kubenswrapper[4680]: E0126 16:25:20.864145 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/placement-db-sync-kjtk7" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.763412 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jzx79"] Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.767441 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jzx79"] Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.813032 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-28dpl"] Jan 26 16:25:21 crc kubenswrapper[4680]: E0126 16:25:21.813480 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dda9df-c554-4d8f-ad69-782e8847d0b6" containerName="keystone-bootstrap" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.813497 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dda9df-c554-4d8f-ad69-782e8847d0b6" containerName="keystone-bootstrap" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.813688 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dda9df-c554-4d8f-ad69-782e8847d0b6" containerName="keystone-bootstrap" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.814441 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.816804 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.816851 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.817765 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.819051 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4fzln" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.824250 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.827821 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-28dpl"] Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.949455 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.949524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.949563 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.949580 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.949806 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqcw\" (UniqueName: \"kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:21 crc kubenswrapper[4680]: I0126 16:25:21.950086 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.051935 4680 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.052015 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.052056 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.052096 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.052160 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsqcw\" (UniqueName: \"kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.052252 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.058085 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.059134 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.059258 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.059264 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") 
" pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.059848 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.069984 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsqcw\" (UniqueName: \"kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw\") pod \"keystone-bootstrap-28dpl\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:22 crc kubenswrapper[4680]: I0126 16:25:22.144986 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:23 crc kubenswrapper[4680]: I0126 16:25:23.179460 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4dda9df-c554-4d8f-ad69-782e8847d0b6" path="/var/lib/kubelet/pods/a4dda9df-c554-4d8f-ad69-782e8847d0b6/volumes" Jan 26 16:25:25 crc kubenswrapper[4680]: I0126 16:25:25.214364 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 26 16:25:25 crc kubenswrapper[4680]: I0126 16:25:25.214954 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:25:26 crc kubenswrapper[4680]: I0126 16:25:26.169341 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:25:26 crc kubenswrapper[4680]: I0126 16:25:26.169673 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:25:26 crc kubenswrapper[4680]: I0126 16:25:26.414227 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:26 crc kubenswrapper[4680]: I0126 16:25:26.414302 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:30 crc kubenswrapper[4680]: I0126 16:25:30.215002 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 26 16:25:35 crc kubenswrapper[4680]: I0126 16:25:35.215570 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.014588 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.014941 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.73:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.015080 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhxtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h9tvh_openstack(71b53f4c-8c15-4f81-b110-3f81b1bd7a5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.019156 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h9tvh" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.122295 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.129498 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.137203 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.154614 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159538 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159604 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159632 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159713 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159761 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159790 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159827 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159854 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159878 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159929 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfq4w\" (UniqueName: \"kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w\") pod \"d1edca43-0123-4c22-83ae-6de4ef44db36\" (UID: \"d1edca43-0123-4c22-83ae-6de4ef44db36\") " Jan 26 
16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.159981 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.160018 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckx7p\" (UniqueName: \"kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.160047 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.160227 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs\") pod \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\" (UID: \"465791a7-75b4-4168-8fe1-c535ecdd8ed9\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.161385 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.161437 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs" (OuterVolumeSpecName: "logs") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.165542 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.191607 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts" (OuterVolumeSpecName: "scripts") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.214485 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p" (OuterVolumeSpecName: "kube-api-access-ckx7p") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "kube-api-access-ckx7p". 
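
The reconciler entries in this capture follow a fixed ladder: operationExecutor.VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded on the way up (keystone-bootstrap-28dpl above), and UnmountVolume started, UnmountVolume.TearDown succeeded, then "Volume detached" on the way down (keystone-bootstrap-jzx79, glance, dnsmasq, horizon). A sketch, under the same kubelet.log assumption, that tallies those phases per volume so a volume that starts a phase but never completes it stands out. One quirk visible above: TearDown lines name the volume by its plugin path and carry the short name as OuterVolumeSpecName, so that field is preferred when present.

// volphase.go: tally kubelet volume-reconciler phases per volume name.
// Stand-alone sketch; "kubelet.log" is an assumed filename.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Quotes inside klog messages may be escaped (volume \"scripts\"), hence \\? .
var volRe = regexp.MustCompile(`volume \\?"([^"\\]+)\\?"`)

// TearDown lines expose the short name only via OuterVolumeSpecName.
var outerRe = regexp.MustCompile(`OuterVolumeSpecName: \\?"([^"\\]+)\\?"`)

// Phases in the order they appear in the log for mount and unmount.
var phases = []string{
	"VerifyControllerAttachedVolume started",
	"MountVolume started",
	"MountVolume.SetUp succeeded",
	"UnmountVolume started",
	"UnmountVolume.TearDown succeeded",
	"Volume detached",
}

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		name := ""
		if m := outerRe.FindStringSubmatch(line); m != nil {
			name = m[1]
		} else if m := volRe.FindStringSubmatch(line); m != nil {
			name = m[1]
		}
		if name == "" {
			continue
		}
		for _, p := range phases {
			if strings.Contains(line, p) {
				if counts[name] == nil {
					counts[name] = map[string]int{}
				}
				counts[name][p]++
			}
		}
	}
	for vol, c := range counts {
		fmt.Println(vol)
		for _, p := range phases {
			fmt.Printf("  %-38s %d\n", p, c[p])
		}
	}
}

Because short names such as scripts or logs recur across pods, a stricter version would key on the UniqueName (the kubernetes.io/... path), which embeds the pod UID.
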
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.226442 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.235777 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.249337 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w" (OuterVolumeSpecName: "kube-api-access-pfq4w") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "kube-api-access-pfq4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.249353 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261371 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261490 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261540 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q26hh\" (UniqueName: \"kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh\") pod \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261568 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261610 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261641 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261692 4680 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261803 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts\") pod \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261838 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data\") pod \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261865 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key\") pod \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261901 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262031 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs\") pod \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\" (UID: \"e2fa8451-2e90-43cf-aedc-85cad61a60c4\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262111 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts\") pod \"115a0140-2fa7-40d4-aadf-be6181fd2244\" (UID: \"115a0140-2fa7-40d4-aadf-be6181fd2244\") " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262711 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262741 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262754 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262766 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465791a7-75b4-4168-8fe1-c535ecdd8ed9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262778 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262793 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfq4w\" (UniqueName: \"kubernetes.io/projected/d1edca43-0123-4c22-83ae-6de4ef44db36-kube-api-access-pfq4w\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.262807 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckx7p\" (UniqueName: \"kubernetes.io/projected/465791a7-75b4-4168-8fe1-c535ecdd8ed9-kube-api-access-ckx7p\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.261589 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.267666 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs" (OuterVolumeSpecName: "logs") pod "e2fa8451-2e90-43cf-aedc-85cad61a60c4" (UID: "e2fa8451-2e90-43cf-aedc-85cad61a60c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.288615 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.289343 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data" (OuterVolumeSpecName: "config-data") pod "e2fa8451-2e90-43cf-aedc-85cad61a60c4" (UID: "e2fa8451-2e90-43cf-aedc-85cad61a60c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.289933 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts" (OuterVolumeSpecName: "scripts") pod "e2fa8451-2e90-43cf-aedc-85cad61a60c4" (UID: "e2fa8451-2e90-43cf-aedc-85cad61a60c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.290489 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.292021 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs" (OuterVolumeSpecName: "logs") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.293255 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f567955f7-8qzq9" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.294599 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e2fa8451-2e90-43cf-aedc-85cad61a60c4" (UID: "e2fa8451-2e90-43cf-aedc-85cad61a60c4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.298009 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config" (OuterVolumeSpecName: "config") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.301956 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts" (OuterVolumeSpecName: "scripts") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.306002 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh" (OuterVolumeSpecName: "kube-api-access-q26hh") pod "e2fa8451-2e90-43cf-aedc-85cad61a60c4" (UID: "e2fa8451-2e90-43cf-aedc-85cad61a60c4"). InnerVolumeSpecName "kube-api-access-q26hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.322295 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s" (OuterVolumeSpecName: "kube-api-access-p7z9s") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "kube-api-access-p7z9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.329803 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.331857 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/barbican-db-sync-h9tvh" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465791a7-75b4-4168-8fe1-c535ecdd8ed9","Type":"ContainerDied","Data":"96c0cc2aaabe4c1725f3db62ab23af7d0669f1ee38af0d0c817add91bedbcb9d"} Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354455 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"115a0140-2fa7-40d4-aadf-be6181fd2244","Type":"ContainerDied","Data":"36b7ec621ffffca9221fc1bd4bf7b8887e2cab529016f85ff219a813ebadd967"} Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" event={"ID":"d1edca43-0123-4c22-83ae-6de4ef44db36","Type":"ContainerDied","Data":"ef63af88a5bfccf4c3c2946f7f2a52785be195a28fd52fdf323faa093db6e8ff"} Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354503 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f567955f7-8qzq9" event={"ID":"e2fa8451-2e90-43cf-aedc-85cad61a60c4","Type":"ContainerDied","Data":"2bb0b648307fe3259ec862a480a324aa949394b9ef6e5d1e69b1ae923b9b80da"} Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354524 4680 scope.go:117] "RemoveContainer" containerID="ea592240267940e59912471c8ee1c8da06b0bed35ce8c718947de77b63db8f9e" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.354883 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.360145 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.367936 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.367977 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2fa8451-2e90-43cf-aedc-85cad61a60c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.367992 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2fa8451-2e90-43cf-aedc-85cad61a60c4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368004 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368014 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2fa8451-2e90-43cf-aedc-85cad61a60c4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368023 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368033 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368043 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368054 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368083 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368099 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q26hh\" (UniqueName: \"kubernetes.io/projected/e2fa8451-2e90-43cf-aedc-85cad61a60c4-kube-api-access-q26hh\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368112 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/115a0140-2fa7-40d4-aadf-be6181fd2244-kube-api-access-p7z9s\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368135 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.368148 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/115a0140-2fa7-40d4-aadf-be6181fd2244-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.372075 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data" (OuterVolumeSpecName: "config-data") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.374707 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "465791a7-75b4-4168-8fe1-c535ecdd8ed9" (UID: "465791a7-75b4-4168-8fe1-c535ecdd8ed9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.376367 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.385381 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.393797 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.401230 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1edca43-0123-4c22-83ae-6de4ef44db36" (UID: "d1edca43-0123-4c22-83ae-6de4ef44db36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.401343 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data" (OuterVolumeSpecName: "config-data") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.409867 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "115a0140-2fa7-40d4-aadf-be6181fd2244" (UID: "115a0140-2fa7-40d4-aadf-be6181fd2244"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469570 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469601 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469611 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1edca43-0123-4c22-83ae-6de4ef44db36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469619 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469628 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469636 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a0140-2fa7-40d4-aadf-be6181fd2244-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469644 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.469652 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465791a7-75b4-4168-8fe1-c535ecdd8ed9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.562009 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.571733 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645207 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.645864 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645880 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 
16:25:37.645898 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645904 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.645924 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645931 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.645962 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="init" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645967 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="init" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.645986 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.645992 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" Jan 26 16:25:37 crc kubenswrapper[4680]: E0126 16:25:37.646003 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646010 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646444 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646470 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-httpd" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646487 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646499 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.646506 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" containerName="glance-log" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.648088 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.679281 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.679432 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.679674 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65zjq" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.679857 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.692820 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.744895 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.762198 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.778249 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783038 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783117 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783139 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783174 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783192 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2xp\" (UniqueName: \"kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783229 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783266 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.783294 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.794101 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.795400 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.800456 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.810901 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.815045 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b94dfffbc-p69gb"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.851096 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890122 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890195 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890225 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890248 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b2xp\" (UniqueName: \"kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp\") 
pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890286 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890327 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890363 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890387 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890413 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890467 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890700 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890728 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxb8\" (UniqueName: \"kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8\") pod 
\"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890749 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.890813 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.894766 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.894996 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.895325 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.903280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.920104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.920231 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.924715 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.925719 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.941742 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b2xp\" (UniqueName: \"kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.941798 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f567955f7-8qzq9"] Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.949306 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995763 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995845 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxb8\" (UniqueName: \"kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995918 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.995984 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.996008 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:37 crc kubenswrapper[4680]: I0126 16:25:37.996507 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.001342 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.001446 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.001693 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.006767 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.008547 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.009711 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.021553 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.023818 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxb8\" (UniqueName: \"kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.057489 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.190530 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.305115 4680 generic.go:334] "Generic (PLEG): container finished" podID="bbd801f9-47d9-4d25-8809-c923b39525bf" containerID="d99b813d199e891b01e73232a4ada37d92016357cd47ca60c98e49cb5f5888a6" exitCode=0 Jan 26 16:25:38 crc kubenswrapper[4680]: I0126 16:25:38.305179 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-84jft" event={"ID":"bbd801f9-47d9-4d25-8809-c923b39525bf","Type":"ContainerDied","Data":"d99b813d199e891b01e73232a4ada37d92016357cd47ca60c98e49cb5f5888a6"} Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.106361 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.106426 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.106576 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlq56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8b6qn_openstack(59df103d-c023-42a1-8e2c-f262d023d232): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.107769 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8b6qn" podUID="59df103d-c023-42a1-8e2c-f262d023d232" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.182597 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115a0140-2fa7-40d4-aadf-be6181fd2244" path="/var/lib/kubelet/pods/115a0140-2fa7-40d4-aadf-be6181fd2244/volumes" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.183615 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465791a7-75b4-4168-8fe1-c535ecdd8ed9" path="/var/lib/kubelet/pods/465791a7-75b4-4168-8fe1-c535ecdd8ed9/volumes" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.184312 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" path="/var/lib/kubelet/pods/d1edca43-0123-4c22-83ae-6de4ef44db36/volumes" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.185528 
4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2fa8451-2e90-43cf-aedc-85cad61a60c4" path="/var/lib/kubelet/pods/e2fa8451-2e90-43cf-aedc-85cad61a60c4/volumes" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.316391 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/cinder-db-sync-8b6qn" podUID="59df103d-c023-42a1-8e2c-f262d023d232" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.549353 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.549634 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.549744 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grrfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zpnh8_openstack(a78a7e79-9fe8-46b7-a137-2be924f24935): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 
16:25:39 crc kubenswrapper[4680]: E0126 16:25:39.550941 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zpnh8" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.627019 4680 scope.go:117] "RemoveContainer" containerID="47fea1763786bae0e26253900a691e8b26d1bc6cb1b3d29746fcaf55b5788425" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.766646 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-84jft" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.840872 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle\") pod \"bbd801f9-47d9-4d25-8809-c923b39525bf\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.840993 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvrrm\" (UniqueName: \"kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm\") pod \"bbd801f9-47d9-4d25-8809-c923b39525bf\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.841518 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config\") pod \"bbd801f9-47d9-4d25-8809-c923b39525bf\" (UID: \"bbd801f9-47d9-4d25-8809-c923b39525bf\") " Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.852371 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm" (OuterVolumeSpecName: "kube-api-access-fvrrm") pod "bbd801f9-47d9-4d25-8809-c923b39525bf" (UID: "bbd801f9-47d9-4d25-8809-c923b39525bf"). InnerVolumeSpecName "kube-api-access-fvrrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.853585 4680 scope.go:117] "RemoveContainer" containerID="97586466d23bfbff058fe67eb01dc1b5d23b659aff313d33e50886422f75a91d" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.883940 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbd801f9-47d9-4d25-8809-c923b39525bf" (UID: "bbd801f9-47d9-4d25-8809-c923b39525bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.884988 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config" (OuterVolumeSpecName: "config") pod "bbd801f9-47d9-4d25-8809-c923b39525bf" (UID: "bbd801f9-47d9-4d25-8809-c923b39525bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.935530 4680 scope.go:117] "RemoveContainer" containerID="25c0d8b68b2d8052bc3396a1f353ea0ff785657f29b23754a071607a9a863522" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.944843 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.944871 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvrrm\" (UniqueName: \"kubernetes.io/projected/bbd801f9-47d9-4d25-8809-c923b39525bf-kube-api-access-fvrrm\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.944880 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbd801f9-47d9-4d25-8809-c923b39525bf-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:39 crc kubenswrapper[4680]: I0126 16:25:39.964273 4680 scope.go:117] "RemoveContainer" containerID="5994ea770791abf0561fe1cd5ef5595113ce47fcc3d466d9071cda2e439bfa3d" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.006924 4680 scope.go:117] "RemoveContainer" containerID="859528e529266e8b67616816014409631538109d4673e3d6647a949fa33a7c3a" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.093938 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c44b75754-m2rxl"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.231217 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b94dfffbc-p69gb" podUID="d1edca43-0123-4c22-83ae-6de4ef44db36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.270515 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8657f7848d-ls2sv"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.386515 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-28dpl"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.388344 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerStarted","Data":"dcd665d0e06a07e1b3902fc171bda3018320e0c0106197069d3ade78a5ccadb8"} Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.404568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kjtk7" event={"ID":"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf","Type":"ContainerStarted","Data":"014b318a143d7888b9d21332c0dddbf11362a51b6a731fec6e7f9a0eb1040350"} Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.432809 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kjtk7" podStartSLOduration=3.268356219 podStartE2EDuration="45.43278866s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="2026-01-26 16:24:57.467751158 +0000 UTC m=+1172.629023427" lastFinishedPulling="2026-01-26 16:25:39.632183599 +0000 UTC m=+1214.793455868" observedRunningTime="2026-01-26 16:25:40.426061759 +0000 UTC m=+1215.587334028" watchObservedRunningTime="2026-01-26 16:25:40.43278866 +0000 UTC m=+1215.594060929" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.454589 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerStarted","Data":"21015a8047ea8bf6294f60025d9d3d1c4cc9be27349b0bfe16b02899df7c8b5b"} Jan 26 16:25:40 crc kubenswrapper[4680]: W0126 16:25:40.461247 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83cffb41_1848_473a_9023_204663891964.slice/crio-df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619 WatchSource:0}: Error finding container df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619: Status 404 returned error can't find the container with id df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619 Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.471541 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerStarted","Data":"1f6251c8b0b9281f60883606b81c2a909e7dfffa64bdf80ddaa7c769afcb559a"} Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.477385 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.485692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerStarted","Data":"6e0095960c7b7264edce513bc3662f3e3cc0d701d556d48d02acd0c35149cd19"} Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.488166 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-84jft" event={"ID":"bbd801f9-47d9-4d25-8809-c923b39525bf","Type":"ContainerDied","Data":"0f732837c7ccc143d0591247043ab5a635fa78b85aa2fb48bc76dbad27e78a74"} Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.488192 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f732837c7ccc143d0591247043ab5a635fa78b85aa2fb48bc76dbad27e78a74" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.488266 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-84jft" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.542218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerStarted","Data":"eb4025e40af69801a3d785e222c9c0b7304ec62ad3b85f2d0c6fb67467eed00c"} Jan 26 16:25:40 crc kubenswrapper[4680]: E0126 16:25:40.544802 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/heat-db-sync-zpnh8" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.568241 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:25:40 crc kubenswrapper[4680]: E0126 16:25:40.568625 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd801f9-47d9-4d25-8809-c923b39525bf" containerName="neutron-db-sync" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.568637 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd801f9-47d9-4d25-8809-c923b39525bf" containerName="neutron-db-sync" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.568876 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd801f9-47d9-4d25-8809-c923b39525bf" containerName="neutron-db-sync" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.575956 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.576053 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.614585 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.659034 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:25:40 crc kubenswrapper[4680]: W0126 16:25:40.662263 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59c7199c_14bd_4851_9059_e677cad6f9c2.slice/crio-297d5a8e98c80f29bc6f1fc528f93918b5c98fdf20aa43c87e9f60343d65af22 WatchSource:0}: Error finding container 297d5a8e98c80f29bc6f1fc528f93918b5c98fdf20aa43c87e9f60343d65af22: Status 404 returned error can't find the container with id 297d5a8e98c80f29bc6f1fc528f93918b5c98fdf20aa43c87e9f60343d65af22 Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.669233 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.676299 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.676545 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-k4jj2" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.676721 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678004 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678102 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678128 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678163 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678188 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678212 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntltf\" (UniqueName: \"kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.678358 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.688520 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.780843 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.780929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.780960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25867\" (UniqueName: \"kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.780988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781014 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781053 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781087 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781115 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781133 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781163 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntltf\" (UniqueName: 
\"kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.781196 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.782228 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.783811 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.784464 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.784791 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.786700 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.822261 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.846733 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntltf\" (UniqueName: \"kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf\") pod \"dnsmasq-dns-cbcb956b7-s7klz\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.885031 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.885709 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25867\" 
(UniqueName: \"kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.885785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.885812 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.885866 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.891570 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.898608 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.899816 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.902483 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.906523 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:40 crc kubenswrapper[4680]: I0126 16:25:40.916956 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25867\" (UniqueName: \"kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867\") pod \"neutron-54c5db589d-4gv27\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.031516 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.578477 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerStarted","Data":"386a6336d98e65a63abb0e4cac266c39c1cd73f7f10f254b1f4bea97352b1bd2"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.579053 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5d49dbb4df-cwsvx" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon-log" containerID="cri-o://21015a8047ea8bf6294f60025d9d3d1c4cc9be27349b0bfe16b02899df7c8b5b" gracePeriod=30 Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.579753 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5d49dbb4df-cwsvx" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon" containerID="cri-o://386a6336d98e65a63abb0e4cac266c39c1cd73f7f10f254b1f4bea97352b1bd2" gracePeriod=30 Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.601802 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerStarted","Data":"ec6181946ca041de069d11f8b19fafc14b9be37191cb040ec1caa89d8cabdcef"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.609317 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerStarted","Data":"297d5a8e98c80f29bc6f1fc528f93918b5c98fdf20aa43c87e9f60343d65af22"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.612329 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerStarted","Data":"956a53f4c3ee7ee43bab040c19e8cf84f05eb0173f399c8a471abb7243c215d8"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.612466 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cb88c9957-lvzdd" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon-log" containerID="cri-o://1f6251c8b0b9281f60883606b81c2a909e7dfffa64bdf80ddaa7c769afcb559a" gracePeriod=30 Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.612865 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cb88c9957-lvzdd" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon" containerID="cri-o://956a53f4c3ee7ee43bab040c19e8cf84f05eb0173f399c8a471abb7243c215d8" gracePeriod=30 Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.616096 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5d49dbb4df-cwsvx" podStartSLOduration=7.054501619 podStartE2EDuration="46.616085811s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="2026-01-26 16:24:57.468553851 +0000 UTC m=+1172.629826120" lastFinishedPulling="2026-01-26 16:25:37.030138043 +0000 UTC m=+1212.191410312" observedRunningTime="2026-01-26 16:25:41.605585583 +0000 UTC m=+1216.766857852" watchObservedRunningTime="2026-01-26 16:25:41.616085811 +0000 UTC m=+1216.777358080" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.646282 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cb88c9957-lvzdd" 
podStartSLOduration=3.484553882 podStartE2EDuration="43.646261207s" podCreationTimestamp="2026-01-26 16:24:58 +0000 UTC" firstStartedPulling="2026-01-26 16:24:59.465591925 +0000 UTC m=+1174.626864194" lastFinishedPulling="2026-01-26 16:25:39.62729925 +0000 UTC m=+1214.788571519" observedRunningTime="2026-01-26 16:25:41.637517199 +0000 UTC m=+1216.798789468" watchObservedRunningTime="2026-01-26 16:25:41.646261207 +0000 UTC m=+1216.807533476" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.650532 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28dpl" event={"ID":"83cffb41-1848-473a-9023-204663891964","Type":"ContainerStarted","Data":"3cee9884da32ee85d09929d42240193f8db967149b332005cdb387077dc45c5f"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.650572 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28dpl" event={"ID":"83cffb41-1848-473a-9023-204663891964","Type":"ContainerStarted","Data":"df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.656798 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerStarted","Data":"6c572d5c665c3d2eb553f17cf6e76a99a3c23b8972469d1331f143a83b8bf254"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.659659 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerStarted","Data":"e44b8f7ab5af8dcb14020deb2b3858f32b1131d300731cbe6bfd1550211a5525"} Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.671823 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-28dpl" podStartSLOduration=20.671807362 podStartE2EDuration="20.671807362s" podCreationTimestamp="2026-01-26 16:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:41.670530336 +0000 UTC m=+1216.831802605" watchObservedRunningTime="2026-01-26 16:25:41.671807362 +0000 UTC m=+1216.833079631" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.705600 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c44b75754-m2rxl" podStartSLOduration=37.70557904 podStartE2EDuration="37.70557904s" podCreationTimestamp="2026-01-26 16:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:41.698966333 +0000 UTC m=+1216.860238602" watchObservedRunningTime="2026-01-26 16:25:41.70557904 +0000 UTC m=+1216.866851309" Jan 26 16:25:41 crc kubenswrapper[4680]: I0126 16:25:41.937710 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:25:41 crc kubenswrapper[4680]: W0126 16:25:41.962027 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f2efbe_2395_47cf_81d7_990164716cda.slice/crio-82c7fedd77ee5c6ed687e179fd6c2f7be4d65c62fa655028a1e37fb85d5b0695 WatchSource:0}: Error finding container 82c7fedd77ee5c6ed687e179fd6c2f7be4d65c62fa655028a1e37fb85d5b0695: Status 404 returned error can't find the container with id 82c7fedd77ee5c6ed687e179fd6c2f7be4d65c62fa655028a1e37fb85d5b0695 Jan 26 16:25:42 crc 
kubenswrapper[4680]: I0126 16:25:42.226872 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:25:42 crc kubenswrapper[4680]: W0126 16:25:42.261126 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec969a4_689e_4e82_aedc_9bed8ebe99b2.slice/crio-431345e6cc451e6eb65b1eae8b4f59ab1e44261d16ba6346b7f7e2f605036693 WatchSource:0}: Error finding container 431345e6cc451e6eb65b1eae8b4f59ab1e44261d16ba6346b7f7e2f605036693: Status 404 returned error can't find the container with id 431345e6cc451e6eb65b1eae8b4f59ab1e44261d16ba6346b7f7e2f605036693 Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.680959 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerStarted","Data":"431345e6cc451e6eb65b1eae8b4f59ab1e44261d16ba6346b7f7e2f605036693"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.685423 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerStarted","Data":"e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.688904 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerStarted","Data":"f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.690546 4680 generic.go:334] "Generic (PLEG): container finished" podID="e7f2efbe-2395-47cf-81d7-990164716cda" containerID="5eebfee23e52d1f5649aacf8a93e7fbc7254105b6d312d7a14c038d3600ef3a2" exitCode=0 Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.690610 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" event={"ID":"e7f2efbe-2395-47cf-81d7-990164716cda","Type":"ContainerDied","Data":"5eebfee23e52d1f5649aacf8a93e7fbc7254105b6d312d7a14c038d3600ef3a2"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.690636 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" event={"ID":"e7f2efbe-2395-47cf-81d7-990164716cda","Type":"ContainerStarted","Data":"82c7fedd77ee5c6ed687e179fd6c2f7be4d65c62fa655028a1e37fb85d5b0695"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.693863 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerStarted","Data":"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.695597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerStarted","Data":"909532695ee2e23844ea8d0405787851f2c4418a489c5b9b6e5d76cb7c64c93b"} Jan 26 16:25:42 crc kubenswrapper[4680]: I0126 16:25:42.711981 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8657f7848d-ls2sv" podStartSLOduration=38.711963122 podStartE2EDuration="38.711963122s" podCreationTimestamp="2026-01-26 16:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 16:25:42.706458145 +0000 UTC m=+1217.867730434" watchObservedRunningTime="2026-01-26 16:25:42.711963122 +0000 UTC m=+1217.873235391" Jan 26 16:25:43 crc kubenswrapper[4680]: I0126 16:25:43.706270 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerStarted","Data":"9fd845542814645ee71ea840b72cdba1eecdcb406eb3a5368b3daf6dcd17d343"} Jan 26 16:25:43 crc kubenswrapper[4680]: I0126 16:25:43.707936 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerStarted","Data":"4c5df3f4d582331e88910e42cd22b166ac2e56fad3c6fb3540e82aa5d641ed98"} Jan 26 16:25:43 crc kubenswrapper[4680]: I0126 16:25:43.743462 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.743442945 podStartE2EDuration="6.743442945s" podCreationTimestamp="2026-01-26 16:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:43.740985895 +0000 UTC m=+1218.902258174" watchObservedRunningTime="2026-01-26 16:25:43.743442945 +0000 UTC m=+1218.904715204" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.409984 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.411749 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.415287 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.415453 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.445238 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483126 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483180 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483235 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483250 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483268 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483301 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.483323 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g72v8\" (UniqueName: \"kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585186 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g72v8\" (UniqueName: \"kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585267 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585306 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585380 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.585397 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.594215 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.599366 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.599542 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.602043 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.605135 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g72v8\" (UniqueName: \"kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.605911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.612044 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config\") pod \"neutron-5b859b45b5-vjk86\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.749321 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerStarted","Data":"006f59522ab7a6194fb16f1f4cae39e35cca6dfd2e6889135a34269be78cca8c"} Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 
16:25:44.751518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerStarted","Data":"d12da4a05e1c88c8a27b4602e3fe446a4b1dd425a8c6634e561741b44153fba9"} Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.751790 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.771615 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" event={"ID":"e7f2efbe-2395-47cf-81d7-990164716cda","Type":"ContainerStarted","Data":"72e40a33377b634e69157219982d6f7a95684cd7b3f6b05fa8c0fd67d0fd72bf"} Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.773608 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54c5db589d-4gv27" podStartSLOduration=4.773596149 podStartE2EDuration="4.773596149s" podCreationTimestamp="2026-01-26 16:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:44.768473964 +0000 UTC m=+1219.929746233" watchObservedRunningTime="2026-01-26 16:25:44.773596149 +0000 UTC m=+1219.934868418" Jan 26 16:25:44 crc kubenswrapper[4680]: I0126 16:25:44.776527 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.114165 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.114475 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.205241 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" podStartSLOduration=5.205219029 podStartE2EDuration="5.205219029s" podCreationTimestamp="2026-01-26 16:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:44.797618781 +0000 UTC m=+1219.958891050" watchObservedRunningTime="2026-01-26 16:25:45.205219029 +0000 UTC m=+1220.366491308" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.343595 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.344692 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.438612 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.780409 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerStarted","Data":"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e"} Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.783671 4680 generic.go:334] "Generic (PLEG): container finished" podID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" containerID="014b318a143d7888b9d21332c0dddbf11362a51b6a731fec6e7f9a0eb1040350" exitCode=0 Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 
16:25:45.783738 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kjtk7" event={"ID":"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf","Type":"ContainerDied","Data":"014b318a143d7888b9d21332c0dddbf11362a51b6a731fec6e7f9a0eb1040350"} Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.786266 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerStarted","Data":"51862ec54444018793f271e8eded522b3515b8e2ba7d5473641011045e1c0e5a"} Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.786658 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.813814 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.81379677 podStartE2EDuration="8.81379677s" podCreationTimestamp="2026-01-26 16:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:45.806097212 +0000 UTC m=+1220.967369481" watchObservedRunningTime="2026-01-26 16:25:45.81379677 +0000 UTC m=+1220.975069039" Jan 26 16:25:45 crc kubenswrapper[4680]: I0126 16:25:45.915468 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.795848 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerStarted","Data":"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508"} Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.796266 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerStarted","Data":"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd"} Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.796755 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.824686 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b859b45b5-vjk86" podStartSLOduration=2.824665169 podStartE2EDuration="2.824665169s" podCreationTimestamp="2026-01-26 16:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:46.823699681 +0000 UTC m=+1221.984971950" watchObservedRunningTime="2026-01-26 16:25:46.824665169 +0000 UTC m=+1221.985937438" Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.984238 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:25:46 crc kubenswrapper[4680]: I0126 16:25:46.984298 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.285171 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kjtk7" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.465461 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts\") pod \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.465637 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data\") pod \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.465657 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52jwt\" (UniqueName: \"kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt\") pod \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.465702 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs\") pod \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.465760 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle\") pod \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\" (UID: \"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf\") " Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.469035 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs" (OuterVolumeSpecName: "logs") pod "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" (UID: "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.509458 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts" (OuterVolumeSpecName: "scripts") pod "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" (UID: "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.509493 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt" (OuterVolumeSpecName: "kube-api-access-52jwt") pod "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" (UID: "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf"). InnerVolumeSpecName "kube-api-access-52jwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.513811 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data" (OuterVolumeSpecName: "config-data") pod "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" (UID: "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.516477 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" (UID: "ab9fd2fb-6b04-4b4b-813b-b7378b617bbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.567630 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.567663 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.567673 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52jwt\" (UniqueName: \"kubernetes.io/projected/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-kube-api-access-52jwt\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.567684 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.567692 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.809398 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kjtk7" Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.809518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kjtk7" event={"ID":"ab9fd2fb-6b04-4b4b-813b-b7378b617bbf","Type":"ContainerDied","Data":"5e913656fd96c82b00e683ed15081714077c2a46fc5aa67a65938d8803009e24"} Jan 26 16:25:47 crc kubenswrapper[4680]: I0126 16:25:47.809548 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e913656fd96c82b00e683ed15081714077c2a46fc5aa67a65938d8803009e24" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.003616 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bd5fc7bfb-cc2hd"] Jan 26 16:25:48 crc kubenswrapper[4680]: E0126 16:25:48.006370 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" containerName="placement-db-sync" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.007046 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" containerName="placement-db-sync" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.007348 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" containerName="placement-db-sync" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.012841 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.018027 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.018165 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.018789 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jq6r7" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.020640 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.020946 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.022139 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bd5fc7bfb-cc2hd"] Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.022785 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.022889 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.063863 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.105522 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.178435 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-scripts\") pod 
\"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.178707 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjdw\" (UniqueName: \"kubernetes.io/projected/556012da-09cc-4426-a904-260931f9ff6b-kube-api-access-bsjdw\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.178878 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-public-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.178917 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-combined-ca-bundle\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.178980 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-internal-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.179044 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-config-data\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.179153 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556012da-09cc-4426-a904-260931f9ff6b-logs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.191802 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.191856 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.263023 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281291 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjdw\" (UniqueName: \"kubernetes.io/projected/556012da-09cc-4426-a904-260931f9ff6b-kube-api-access-bsjdw\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 
16:25:48.281370 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-public-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281397 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-combined-ca-bundle\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281427 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-internal-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281454 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-config-data\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556012da-09cc-4426-a904-260931f9ff6b-logs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.281576 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-scripts\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.282014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556012da-09cc-4426-a904-260931f9ff6b-logs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.282960 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.293218 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-internal-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.294548 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-public-tls-certs\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc 
kubenswrapper[4680]: I0126 16:25:48.294988 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-config-data\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.297623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-scripts\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.298434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556012da-09cc-4426-a904-260931f9ff6b-combined-ca-bundle\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.314986 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjdw\" (UniqueName: \"kubernetes.io/projected/556012da-09cc-4426-a904-260931f9ff6b-kube-api-access-bsjdw\") pod \"placement-5bd5fc7bfb-cc2hd\" (UID: \"556012da-09cc-4426-a904-260931f9ff6b\") " pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.334036 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.419396 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.820590 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.820968 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.821004 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:48 crc kubenswrapper[4680]: I0126 16:25:48.821016 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:25:49 crc kubenswrapper[4680]: I0126 16:25:49.146841 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bd5fc7bfb-cc2hd"] Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.845572 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.845898 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.845576 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.893263 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.969798 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:25:50 crc kubenswrapper[4680]: I0126 16:25:50.981355 4680 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="dnsmasq-dns" containerID="cri-o://f7196f4d9625bafd28d8b9f31b82081ecd8cf88e0559f0202f97950806390f14" gracePeriod=10 Jan 26 16:25:51 crc kubenswrapper[4680]: I0126 16:25:51.298827 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Jan 26 16:25:51 crc kubenswrapper[4680]: I0126 16:25:51.856272 4680 generic.go:334] "Generic (PLEG): container finished" podID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerID="f7196f4d9625bafd28d8b9f31b82081ecd8cf88e0559f0202f97950806390f14" exitCode=0 Jan 26 16:25:51 crc kubenswrapper[4680]: I0126 16:25:51.856323 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" event={"ID":"a72e4213-72e8-4a99-9863-fe63708b3f22","Type":"ContainerDied","Data":"f7196f4d9625bafd28d8b9f31b82081ecd8cf88e0559f0202f97950806390f14"} Jan 26 16:25:53 crc kubenswrapper[4680]: I0126 16:25:53.906901 4680 generic.go:334] "Generic (PLEG): container finished" podID="83cffb41-1848-473a-9023-204663891964" containerID="3cee9884da32ee85d09929d42240193f8db967149b332005cdb387077dc45c5f" exitCode=0 Jan 26 16:25:53 crc kubenswrapper[4680]: I0126 16:25:53.906974 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28dpl" event={"ID":"83cffb41-1848-473a-9023-204663891964","Type":"ContainerDied","Data":"3cee9884da32ee85d09929d42240193f8db967149b332005cdb387077dc45c5f"} Jan 26 16:25:53 crc kubenswrapper[4680]: I0126 16:25:53.909651 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd5fc7bfb-cc2hd" event={"ID":"556012da-09cc-4426-a904-260931f9ff6b","Type":"ContainerStarted","Data":"455f9f20d8576f65018f071526e2808ddc411b1fad4f564c66751d5b2370b285"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.046155 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145338 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgzw8\" (UniqueName: \"kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145403 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145438 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145485 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145521 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.145580 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc\") pod \"a72e4213-72e8-4a99-9863-fe63708b3f22\" (UID: \"a72e4213-72e8-4a99-9863-fe63708b3f22\") " Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.185450 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8" (OuterVolumeSpecName: "kube-api-access-kgzw8") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "kube-api-access-kgzw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.247755 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgzw8\" (UniqueName: \"kubernetes.io/projected/a72e4213-72e8-4a99-9863-fe63708b3f22-kube-api-access-kgzw8\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.511687 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.524684 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.529012 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.542516 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config" (OuterVolumeSpecName: "config") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.553999 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.554037 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.554046 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.554056 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.589603 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a72e4213-72e8-4a99-9863-fe63708b3f22" (UID: "a72e4213-72e8-4a99-9863-fe63708b3f22"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.655140 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a72e4213-72e8-4a99-9863-fe63708b3f22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.924448 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h9tvh" event={"ID":"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c","Type":"ContainerStarted","Data":"6e348aa6b9c38d2662a7843f1018b3ef6a29d55fd9c23f5bd317f1bf7472edc8"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.931563 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerStarted","Data":"e7459ff85f6b35a0923958b9cf866d435ab1d881e42ce3620bb4f81c04a7c287"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.934684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" event={"ID":"a72e4213-72e8-4a99-9863-fe63708b3f22","Type":"ContainerDied","Data":"088bd2cc5078d4e6f8b72a1f1a016c9ad8dc7dc103648ed24e77a42ae60b6247"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.934758 4680 scope.go:117] "RemoveContainer" containerID="f7196f4d9625bafd28d8b9f31b82081ecd8cf88e0559f0202f97950806390f14" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.934760 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-788d7bbc75-7s7n4" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.947141 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h9tvh" podStartSLOduration=3.860687799 podStartE2EDuration="59.94712222s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="2026-01-26 16:24:57.467193872 +0000 UTC m=+1172.628466141" lastFinishedPulling="2026-01-26 16:25:53.553628293 +0000 UTC m=+1228.714900562" observedRunningTime="2026-01-26 16:25:54.937519477 +0000 UTC m=+1230.098791746" watchObservedRunningTime="2026-01-26 16:25:54.94712222 +0000 UTC m=+1230.108394489" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.948034 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd5fc7bfb-cc2hd" event={"ID":"556012da-09cc-4426-a904-260931f9ff6b","Type":"ContainerStarted","Data":"59794844808c1d48e1d078876c67e1b81f9e840ac7c721f1d7322183d8f9661f"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.948165 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd5fc7bfb-cc2hd" event={"ID":"556012da-09cc-4426-a904-260931f9ff6b","Type":"ContainerStarted","Data":"390e581dec2385dcc803fded388635c6e5279de4bc752adbb518de09bfdee23a"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.949151 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.949217 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.962496 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zpnh8" event={"ID":"a78a7e79-9fe8-46b7-a137-2be924f24935","Type":"ContainerStarted","Data":"2ef62766e4859c29f36428511229e5e71147cdde356b816a061e148eea62b8df"} Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 
16:25:54.987837 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bd5fc7bfb-cc2hd" podStartSLOduration=7.9878157850000004 podStartE2EDuration="7.987815785s" podCreationTimestamp="2026-01-26 16:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:54.980498937 +0000 UTC m=+1230.141771206" watchObservedRunningTime="2026-01-26 16:25:54.987815785 +0000 UTC m=+1230.149088054" Jan 26 16:25:54 crc kubenswrapper[4680]: I0126 16:25:54.997155 4680 scope.go:117] "RemoveContainer" containerID="b3efb458e55e5d75976dbce71ffb4c00dfc7dfd9f07ef7521a04e212daf7b569" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.015034 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zpnh8" podStartSLOduration=3.970241147 podStartE2EDuration="1m1.015006626s" podCreationTimestamp="2026-01-26 16:24:54 +0000 UTC" firstStartedPulling="2026-01-26 16:24:56.505743216 +0000 UTC m=+1171.667015485" lastFinishedPulling="2026-01-26 16:25:53.550508695 +0000 UTC m=+1228.711780964" observedRunningTime="2026-01-26 16:25:55.001337378 +0000 UTC m=+1230.162609647" watchObservedRunningTime="2026-01-26 16:25:55.015006626 +0000 UTC m=+1230.176278905" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.042134 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.048391 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-788d7bbc75-7s7n4"] Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.114946 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.182380 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" path="/var/lib/kubelet/pods/a72e4213-72e8-4a99-9863-fe63708b3f22/volumes" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.358465 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.520142 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.673970 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.674037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsqcw\" (UniqueName: \"kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.674086 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.674210 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.674368 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.674703 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle\") pod \"83cffb41-1848-473a-9023-204663891964\" (UID: \"83cffb41-1848-473a-9023-204663891964\") " Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.682688 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts" (OuterVolumeSpecName: "scripts") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.685243 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.698281 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.699242 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw" (OuterVolumeSpecName: "kube-api-access-gsqcw") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "kube-api-access-gsqcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.701559 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data" (OuterVolumeSpecName: "config-data") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.730739 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83cffb41-1848-473a-9023-204663891964" (UID: "83cffb41-1848-473a-9023-204663891964"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777473 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777509 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777521 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777535 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsqcw\" (UniqueName: \"kubernetes.io/projected/83cffb41-1848-473a-9023-204663891964-kube-api-access-gsqcw\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777547 4680 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.777557 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83cffb41-1848-473a-9023-204663891964-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.980570 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8b6qn" event={"ID":"59df103d-c023-42a1-8e2c-f262d023d232","Type":"ContainerStarted","Data":"51edf639f8cce8a9ff45a4212c79bf986f3a6d9c52e4b273c029d958072ac80f"} Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.988962 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-28dpl" Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.993763 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28dpl" event={"ID":"83cffb41-1848-473a-9023-204663891964","Type":"ContainerDied","Data":"df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619"} Jan 26 16:25:55 crc kubenswrapper[4680]: I0126 16:25:55.993802 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2b0738f53772bd70ff6472672afa4a89825ff06afb573f338501addde5b619" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.030801 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8b6qn" podStartSLOduration=4.97069408 podStartE2EDuration="1m1.030781514s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="2026-01-26 16:24:57.467656415 +0000 UTC m=+1172.628928684" lastFinishedPulling="2026-01-26 16:25:53.527743849 +0000 UTC m=+1228.689016118" observedRunningTime="2026-01-26 16:25:56.006472844 +0000 UTC m=+1231.167745113" watchObservedRunningTime="2026-01-26 16:25:56.030781514 +0000 UTC m=+1231.192053773" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346258 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84ff475c8b-dxr5b"] Jan 26 16:25:56 crc kubenswrapper[4680]: E0126 16:25:56.346630 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="init" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346646 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="init" Jan 26 16:25:56 crc kubenswrapper[4680]: E0126 16:25:56.346670 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="dnsmasq-dns" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346677 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="dnsmasq-dns" Jan 26 16:25:56 crc kubenswrapper[4680]: E0126 16:25:56.346691 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cffb41-1848-473a-9023-204663891964" containerName="keystone-bootstrap" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346697 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cffb41-1848-473a-9023-204663891964" containerName="keystone-bootstrap" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346860 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72e4213-72e8-4a99-9863-fe63708b3f22" containerName="dnsmasq-dns" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.346885 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cffb41-1848-473a-9023-204663891964" containerName="keystone-bootstrap" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.347459 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.356196 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4fzln" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.358499 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.358701 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.360835 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.361961 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.384013 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84ff475c8b-dxr5b"] Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.385346 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490320 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-public-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490362 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-combined-ca-bundle\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490420 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j264k\" (UniqueName: \"kubernetes.io/projected/87b63492-e20d-47c3-9ead-0b4afd2846ff-kube-api-access-j264k\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-internal-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490483 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-credential-keys\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-fernet-keys\") pod 
\"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490549 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-scripts\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.490578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-config-data\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.591968 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j264k\" (UniqueName: \"kubernetes.io/projected/87b63492-e20d-47c3-9ead-0b4afd2846ff-kube-api-access-j264k\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592025 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-internal-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592061 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-credential-keys\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592157 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-fernet-keys\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592185 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-scripts\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592215 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-config-data\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592270 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-public-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " 
pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.592289 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-combined-ca-bundle\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.600029 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-public-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.600474 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-fernet-keys\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.601240 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-config-data\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.606223 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-combined-ca-bundle\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.618256 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-scripts\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.619890 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-credential-keys\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.626770 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b63492-e20d-47c3-9ead-0b4afd2846ff-internal-tls-certs\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.631619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j264k\" (UniqueName: \"kubernetes.io/projected/87b63492-e20d-47c3-9ead-0b4afd2846ff-kube-api-access-j264k\") pod \"keystone-84ff475c8b-dxr5b\" (UID: \"87b63492-e20d-47c3-9ead-0b4afd2846ff\") " pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:56 crc kubenswrapper[4680]: I0126 16:25:56.668890 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.451045 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.451468 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.523679 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.572732 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84ff475c8b-dxr5b"] Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.593732 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.593877 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:25:57 crc kubenswrapper[4680]: I0126 16:25:57.631778 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:25:58 crc kubenswrapper[4680]: I0126 16:25:58.015405 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84ff475c8b-dxr5b" event={"ID":"87b63492-e20d-47c3-9ead-0b4afd2846ff","Type":"ContainerStarted","Data":"6862c93fd1f42612000372713f6ae651c42030c189691908e07b0eae2d85334d"} Jan 26 16:25:59 crc kubenswrapper[4680]: I0126 16:25:59.093533 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84ff475c8b-dxr5b" event={"ID":"87b63492-e20d-47c3-9ead-0b4afd2846ff","Type":"ContainerStarted","Data":"bfbed92731232318005132412020bb48875a6874ee09b5d89a3c6f0208fae1a1"} Jan 26 16:25:59 crc kubenswrapper[4680]: I0126 16:25:59.094090 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:25:59 crc kubenswrapper[4680]: I0126 16:25:59.123501 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84ff475c8b-dxr5b" podStartSLOduration=3.123479001 podStartE2EDuration="3.123479001s" podCreationTimestamp="2026-01-26 16:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:25:59.116962637 +0000 UTC m=+1234.278234906" watchObservedRunningTime="2026-01-26 16:25:59.123479001 +0000 UTC m=+1234.284751260" Jan 26 16:26:02 crc kubenswrapper[4680]: I0126 16:26:02.136703 4680 generic.go:334] "Generic (PLEG): container finished" podID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" containerID="6e348aa6b9c38d2662a7843f1018b3ef6a29d55fd9c23f5bd317f1bf7472edc8" exitCode=0 Jan 26 16:26:02 crc kubenswrapper[4680]: I0126 16:26:02.136789 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h9tvh" event={"ID":"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c","Type":"ContainerDied","Data":"6e348aa6b9c38d2662a7843f1018b3ef6a29d55fd9c23f5bd317f1bf7472edc8"} Jan 26 16:26:05 crc kubenswrapper[4680]: I0126 16:26:05.113520 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection 
refused" Jan 26 16:26:05 crc kubenswrapper[4680]: I0126 16:26:05.343259 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 16:26:06 crc kubenswrapper[4680]: I0126 16:26:06.189530 4680 generic.go:334] "Generic (PLEG): container finished" podID="a78a7e79-9fe8-46b7-a137-2be924f24935" containerID="2ef62766e4859c29f36428511229e5e71147cdde356b816a061e148eea62b8df" exitCode=0 Jan 26 16:26:06 crc kubenswrapper[4680]: I0126 16:26:06.189625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zpnh8" event={"ID":"a78a7e79-9fe8-46b7-a137-2be924f24935","Type":"ContainerDied","Data":"2ef62766e4859c29f36428511229e5e71147cdde356b816a061e148eea62b8df"} Jan 26 16:26:07 crc kubenswrapper[4680]: I0126 16:26:07.202716 4680 generic.go:334] "Generic (PLEG): container finished" podID="59df103d-c023-42a1-8e2c-f262d023d232" containerID="51edf639f8cce8a9ff45a4212c79bf986f3a6d9c52e4b273c029d958072ac80f" exitCode=0 Jan 26 16:26:07 crc kubenswrapper[4680]: I0126 16:26:07.202724 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8b6qn" event={"ID":"59df103d-c023-42a1-8e2c-f262d023d232","Type":"ContainerDied","Data":"51edf639f8cce8a9ff45a4212c79bf986f3a6d9c52e4b273c029d958072ac80f"} Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.643915 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zpnh8" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.666353 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.672520 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data\") pod \"a78a7e79-9fe8-46b7-a137-2be924f24935\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.672778 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle\") pod \"a78a7e79-9fe8-46b7-a137-2be924f24935\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.672845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grrfv\" (UniqueName: \"kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv\") pod \"a78a7e79-9fe8-46b7-a137-2be924f24935\" (UID: \"a78a7e79-9fe8-46b7-a137-2be924f24935\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.676213 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.758212 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv" (OuterVolumeSpecName: "kube-api-access-grrfv") pod "a78a7e79-9fe8-46b7-a137-2be924f24935" (UID: "a78a7e79-9fe8-46b7-a137-2be924f24935"). InnerVolumeSpecName "kube-api-access-grrfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.775869 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.775931 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.775968 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.776021 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.776096 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlq56\" (UniqueName: \"kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.776184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle\") pod \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.779238 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle\") pod \"59df103d-c023-42a1-8e2c-f262d023d232\" (UID: \"59df103d-c023-42a1-8e2c-f262d023d232\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.779329 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhxtx\" (UniqueName: \"kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx\") pod \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.779351 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data\") pod \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\" (UID: \"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c\") " Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.780009 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grrfv\" (UniqueName: \"kubernetes.io/projected/a78a7e79-9fe8-46b7-a137-2be924f24935-kube-api-access-grrfv\") on node \"crc\" DevicePath \"\"" Jan 
26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.783220 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.795281 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" (UID: "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.809343 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a78a7e79-9fe8-46b7-a137-2be924f24935" (UID: "a78a7e79-9fe8-46b7-a137-2be924f24935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.820644 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56" (OuterVolumeSpecName: "kube-api-access-wlq56") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "kube-api-access-wlq56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.822381 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.824359 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts" (OuterVolumeSpecName: "scripts") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.825916 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data" (OuterVolumeSpecName: "config-data") pod "a78a7e79-9fe8-46b7-a137-2be924f24935" (UID: "a78a7e79-9fe8-46b7-a137-2be924f24935"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.830198 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx" (OuterVolumeSpecName: "kube-api-access-jhxtx") pod "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" (UID: "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c"). InnerVolumeSpecName "kube-api-access-jhxtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882691 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882731 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhxtx\" (UniqueName: \"kubernetes.io/projected/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-kube-api-access-jhxtx\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882745 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882759 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78a7e79-9fe8-46b7-a137-2be924f24935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882777 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59df103d-c023-42a1-8e2c-f262d023d232-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882789 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882799 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.882811 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlq56\" (UniqueName: \"kubernetes.io/projected/59df103d-c023-42a1-8e2c-f262d023d232-kube-api-access-wlq56\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.891196 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data" (OuterVolumeSpecName: "config-data") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.891609 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59df103d-c023-42a1-8e2c-f262d023d232" (UID: "59df103d-c023-42a1-8e2c-f262d023d232"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.899339 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" (UID: "71b53f4c-8c15-4f81-b110-3f81b1bd7a5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.985058 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.985330 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:09 crc kubenswrapper[4680]: I0126 16:26:09.985339 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df103d-c023-42a1-8e2c-f262d023d232-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.233396 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8b6qn" event={"ID":"59df103d-c023-42a1-8e2c-f262d023d232","Type":"ContainerDied","Data":"c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c"} Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.233434 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65731b044bced1ef8f3b729977361f09444cae2792c85001f8dc45c42d5dd7c" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.233490 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8b6qn" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.251964 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h9tvh" event={"ID":"71b53f4c-8c15-4f81-b110-3f81b1bd7a5c","Type":"ContainerDied","Data":"595a6078c809cda08ccc79bfbf1131a2a15f71e66f61f98300f567e3f0b767f4"} Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.252004 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595a6078c809cda08ccc79bfbf1131a2a15f71e66f61f98300f567e3f0b767f4" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.252092 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h9tvh" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.258889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zpnh8" event={"ID":"a78a7e79-9fe8-46b7-a137-2be924f24935","Type":"ContainerDied","Data":"e0b8dd2c7e5149560390857290eb6af59d7bcab0858808f678342830ccc8bdeb"} Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.258956 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b8dd2c7e5149560390857290eb6af59d7bcab0858808f678342830ccc8bdeb" Jan 26 16:26:10 crc kubenswrapper[4680]: I0126 16:26:10.259011 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zpnh8" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094019 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c9b6cc8f5-jfsbx"] Jan 26 16:26:11 crc kubenswrapper[4680]: E0126 16:26:11.094520 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" containerName="barbican-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094536 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" containerName="barbican-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: E0126 16:26:11.094556 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59df103d-c023-42a1-8e2c-f262d023d232" containerName="cinder-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094562 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59df103d-c023-42a1-8e2c-f262d023d232" containerName="cinder-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: E0126 16:26:11.094578 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" containerName="heat-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094584 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" containerName="heat-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094784 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" containerName="barbican-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094813 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="59df103d-c023-42a1-8e2c-f262d023d232" containerName="cinder-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.094828 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" containerName="heat-db-sync" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.095874 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.113004 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.113183 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.113370 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-78kpg" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.113500 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.114500 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.125208 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.125287 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wm89s" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.125331 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.125348 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.134752 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77566b99db-dfsnv"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.136184 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137309 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137354 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data-custom\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137378 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfn8\" (UniqueName: \"kubernetes.io/projected/3ed333f1-0250-4607-aaac-55700afea2b8-kube-api-access-jcfn8\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137407 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137431 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbx5\" (UniqueName: \"kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137466 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: 
I0126 16:26:11.137485 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137502 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed333f1-0250-4607-aaac-55700afea2b8-logs\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137554 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.137577 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-combined-ca-bundle\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.139190 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.232641 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c9b6cc8f5-jfsbx"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.251477 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/897bb792-a70a-409f-9d9e-d15b0506cb42-logs\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.251692 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-combined-ca-bundle\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.251811 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 
26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.251928 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252026 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data-custom\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252137 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfn8\" (UniqueName: \"kubernetes.io/projected/3ed333f1-0250-4607-aaac-55700afea2b8-kube-api-access-jcfn8\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252241 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnlt\" (UniqueName: \"kubernetes.io/projected/897bb792-a70a-409f-9d9e-d15b0506cb42-kube-api-access-dxnlt\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252332 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252418 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hbx5\" (UniqueName: \"kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252568 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252656 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252721 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252838 
4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-combined-ca-bundle\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252918 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed333f1-0250-4607-aaac-55700afea2b8-logs\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.252998 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data-custom\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.253112 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.253295 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.269104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed333f1-0250-4607-aaac-55700afea2b8-logs\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.271497 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.274204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.274788 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-combined-ca-bundle\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.283850 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.283966 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.301217 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.302645 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed333f1-0250-4607-aaac-55700afea2b8-config-data-custom\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.320202 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.355147 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnlt\" (UniqueName: \"kubernetes.io/projected/897bb792-a70a-409f-9d9e-d15b0506cb42-kube-api-access-dxnlt\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.355275 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-combined-ca-bundle\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.355304 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data-custom\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.355346 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/897bb792-a70a-409f-9d9e-d15b0506cb42-logs\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.355399 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " 
pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.363302 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/897bb792-a70a-409f-9d9e-d15b0506cb42-logs\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.383245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.383741 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfn8\" (UniqueName: \"kubernetes.io/projected/3ed333f1-0250-4607-aaac-55700afea2b8-kube-api-access-jcfn8\") pod \"barbican-worker-6c9b6cc8f5-jfsbx\" (UID: \"3ed333f1-0250-4607-aaac-55700afea2b8\") " pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.386121 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-config-data-custom\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.400760 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897bb792-a70a-409f-9d9e-d15b0506cb42-combined-ca-bundle\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.404201 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.406428 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hbx5\" (UniqueName: \"kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5\") pod \"cinder-scheduler-0\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.419036 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnlt\" (UniqueName: \"kubernetes.io/projected/897bb792-a70a-409f-9d9e-d15b0506cb42-kube-api-access-dxnlt\") pod \"barbican-keystone-listener-77566b99db-dfsnv\" (UID: \"897bb792-a70a-409f-9d9e-d15b0506cb42\") " pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.424474 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ccb7b957c-p2tm5"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.442670 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.443414 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.485665 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.498234 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.509859 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ccb7b957c-p2tm5"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.568202 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77566b99db-dfsnv"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.569475 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vr7\" (UniqueName: \"kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.569725 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.569874 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.570031 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.570639 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.571974 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.616171 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.618006 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.635562 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.643576 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683771 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683828 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683847 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683926 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683958 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vr7\" (UniqueName: \"kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.683985 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.684030 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.684067 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.684096 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8xp\" (UniqueName: \"kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.684119 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.685116 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.693325 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.693464 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.695044 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.696540 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: 
\"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.702168 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ccb7b957c-p2tm5"] Jan 26 16:26:11 crc kubenswrapper[4680]: E0126 16:26:11.702797 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-54vr7], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" podUID="ef3ed449-bac5-40e6-a123-2df4f834faba" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.785972 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54vr7\" (UniqueName: \"kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7\") pod \"dnsmasq-dns-6ccb7b957c-p2tm5\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786697 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786734 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786759 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786805 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786845 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786900 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.786918 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn8xp\" (UniqueName: \"kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.787626 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.788096 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.802438 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.807327 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.812502 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.812675 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.814212 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.826183 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.848092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn8xp\" (UniqueName: \"kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp\") pod \"cinder-api-0\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " pod="openstack/cinder-api-0" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.890948 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzpc\" (UniqueName: \"kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.891009 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.891060 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.891135 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.891165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.891235 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.902395 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.927556 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 
16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.928996 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.933878 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995038 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58k4\" (UniqueName: \"kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995424 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995492 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995542 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chzpc\" (UniqueName: \"kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995566 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995594 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 
16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995660 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995682 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.995714 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.997776 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.998255 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.998724 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:11 crc kubenswrapper[4680]: I0126 16:26:11.998859 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.003636 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.003684 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.019957 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzpc\" (UniqueName: \"kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc\") pod \"dnsmasq-dns-6c5c8f4c67-xps6r\" (UID: 
\"39034751-1073-4ef0-b70e-7553c2d9224c\") " pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.089698 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.100699 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.102219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.102377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p58k4\" (UniqueName: \"kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.102620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.102670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.105104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.124172 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.125062 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.127437 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p58k4\" (UniqueName: 
\"kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.133535 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle\") pod \"barbican-api-65bcdb7d94-8lznk\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.161727 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.277677 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.411094 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.411451 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b859b45b5-vjk86" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-api" containerID="cri-o://faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.412323 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b859b45b5-vjk86" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-httpd" containerID="cri-o://5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.477306 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.488728 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.506723 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerStarted","Data":"b7cd3827bb25c07fb8f3342d1d61d0b29e5326dad79507debcad2f04c6ea089f"} Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.506899 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-central-agent" containerID="cri-o://eb4025e40af69801a3d785e222c9c0b7304ec62ad3b85f2d0c6fb67467eed00c" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.507181 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.507239 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="proxy-httpd" containerID="cri-o://b7cd3827bb25c07fb8f3342d1d61d0b29e5326dad79507debcad2f04c6ea089f" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.507281 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="sg-core" containerID="cri-o://e7459ff85f6b35a0923958b9cf866d435ab1d881e42ce3620bb4f81c04a7c287" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.507314 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-notification-agent" containerID="cri-o://006f59522ab7a6194fb16f1f4cae39e35cca6dfd2e6889135a34269be78cca8c" gracePeriod=30 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.550259 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerID="956a53f4c3ee7ee43bab040c19e8cf84f05eb0173f399c8a471abb7243c215d8" exitCode=137 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.550281 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerID="1f6251c8b0b9281f60883606b81c2a909e7dfffa64bdf80ddaa7c769afcb559a" exitCode=137 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.550321 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerDied","Data":"956a53f4c3ee7ee43bab040c19e8cf84f05eb0173f399c8a471abb7243c215d8"} Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.550346 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerDied","Data":"1f6251c8b0b9281f60883606b81c2a909e7dfffa64bdf80ddaa7c769afcb559a"} Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.551751 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.570257 4680 generic.go:334] "Generic (PLEG): container finished" podID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerID="386a6336d98e65a63abb0e4cac266c39c1cd73f7f10f254b1f4bea97352b1bd2" exitCode=137 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.570290 4680 
generic.go:334] "Generic (PLEG): container finished" podID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerID="21015a8047ea8bf6294f60025d9d3d1c4cc9be27349b0bfe16b02899df7c8b5b" exitCode=137 Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.570354 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.570460 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerDied","Data":"386a6336d98e65a63abb0e4cac266c39c1cd73f7f10f254b1f4bea97352b1bd2"} Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.570660 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerDied","Data":"21015a8047ea8bf6294f60025d9d3d1c4cc9be27349b0bfe16b02899df7c8b5b"} Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593582 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593631 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593655 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593704 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593764 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jk2\" (UniqueName: \"kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593813 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.593834 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.603905 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.628800 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697316 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697638 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697726 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54vr7\" (UniqueName: \"kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697754 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697771 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697802 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config\") pod \"ef3ed449-bac5-40e6-a123-2df4f834faba\" (UID: \"ef3ed449-bac5-40e6-a123-2df4f834faba\") " Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697847 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697910 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697948 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.697991 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.698010 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.698035 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.698107 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.698182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54jk2\" (UniqueName: \"kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.698266 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.700036 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.701779 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.710817 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.713240 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.713270 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config" (OuterVolumeSpecName: "config") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.734107 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.54133357 podStartE2EDuration="1m17.734086276s" podCreationTimestamp="2026-01-26 16:24:55 +0000 UTC" firstStartedPulling="2026-01-26 16:24:56.945455195 +0000 UTC m=+1172.106727464" lastFinishedPulling="2026-01-26 16:26:11.138207901 +0000 UTC m=+1246.299480170" observedRunningTime="2026-01-26 16:26:12.71365736 +0000 UTC m=+1247.874929629" watchObservedRunningTime="2026-01-26 16:26:12.734086276 +0000 UTC m=+1247.895358545" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.735777 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.749147 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.760879 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.761140 4680 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7" (OuterVolumeSpecName: "kube-api-access-54vr7") pod "ef3ed449-bac5-40e6-a123-2df4f834faba" (UID: "ef3ed449-bac5-40e6-a123-2df4f834faba"). InnerVolumeSpecName "kube-api-access-54vr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.762476 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.770950 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.780844 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jk2\" (UniqueName: \"kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.790205 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs\") pod \"neutron-9b85fd5c9-zlmxc\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.802555 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54vr7\" (UniqueName: \"kubernetes.io/projected/ef3ed449-bac5-40e6-a123-2df4f834faba-kube-api-access-54vr7\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.802598 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.802607 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.802615 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.802625 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef3ed449-bac5-40e6-a123-2df4f834faba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.837268 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 16:26:12 crc kubenswrapper[4680]: I0126 16:26:12.926512 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.027719 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c9b6cc8f5-jfsbx"] Jan 26 16:26:13 crc kubenswrapper[4680]: W0126 16:26:13.087218 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ed333f1_0250_4607_aaac_55700afea2b8.slice/crio-6e77a66b9d4f91c2157f99549b4d07405e566d91f4507645bc0caa8bd4d79a9d WatchSource:0}: Error finding container 6e77a66b9d4f91c2157f99549b4d07405e566d91f4507645bc0caa8bd4d79a9d: Status 404 returned error can't find the container with id 6e77a66b9d4f91c2157f99549b4d07405e566d91f4507645bc0caa8bd4d79a9d Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.191611 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77566b99db-dfsnv"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.229935 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.331734 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvr74\" (UniqueName: \"kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74\") pod \"8d52a893-b89a-4ee1-b056-78a94a87ac96\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.331820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data\") pod \"8d52a893-b89a-4ee1-b056-78a94a87ac96\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.336218 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74" (OuterVolumeSpecName: "kube-api-access-qvr74") pod "8d52a893-b89a-4ee1-b056-78a94a87ac96" (UID: "8d52a893-b89a-4ee1-b056-78a94a87ac96"). InnerVolumeSpecName "kube-api-access-qvr74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.390722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data" (OuterVolumeSpecName: "config-data") pod "8d52a893-b89a-4ee1-b056-78a94a87ac96" (UID: "8d52a893-b89a-4ee1-b056-78a94a87ac96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.432964 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key\") pod \"8d52a893-b89a-4ee1-b056-78a94a87ac96\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.433103 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts\") pod \"8d52a893-b89a-4ee1-b056-78a94a87ac96\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.433143 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs\") pod \"8d52a893-b89a-4ee1-b056-78a94a87ac96\" (UID: \"8d52a893-b89a-4ee1-b056-78a94a87ac96\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.433529 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.433541 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvr74\" (UniqueName: \"kubernetes.io/projected/8d52a893-b89a-4ee1-b056-78a94a87ac96-kube-api-access-qvr74\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.433851 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs" (OuterVolumeSpecName: "logs") pod "8d52a893-b89a-4ee1-b056-78a94a87ac96" (UID: "8d52a893-b89a-4ee1-b056-78a94a87ac96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.436050 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d52a893-b89a-4ee1-b056-78a94a87ac96" (UID: "8d52a893-b89a-4ee1-b056-78a94a87ac96"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.497401 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts" (OuterVolumeSpecName: "scripts") pod "8d52a893-b89a-4ee1-b056-78a94a87ac96" (UID: "8d52a893-b89a-4ee1-b056-78a94a87ac96"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.537244 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d52a893-b89a-4ee1-b056-78a94a87ac96-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.538156 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d52a893-b89a-4ee1-b056-78a94a87ac96-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.538263 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d52a893-b89a-4ee1-b056-78a94a87ac96-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.541224 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.578682 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.623107 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.639487 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts\") pod \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.639664 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data\") pod \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.639781 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs\") pod \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.639878 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key\") pod \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.639961 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77v27\" (UniqueName: \"kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27\") pod \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\" (UID: \"2b3c8f55-5256-479d-a7e8-3b42ec63414c\") " Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.641032 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs" (OuterVolumeSpecName: "logs") pod "2b3c8f55-5256-479d-a7e8-3b42ec63414c" (UID: "2b3c8f55-5256-479d-a7e8-3b42ec63414c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.653668 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2b3c8f55-5256-479d-a7e8-3b42ec63414c" (UID: "2b3c8f55-5256-479d-a7e8-3b42ec63414c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.662289 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27" (OuterVolumeSpecName: "kube-api-access-77v27") pod "2b3c8f55-5256-479d-a7e8-3b42ec63414c" (UID: "2b3c8f55-5256-479d-a7e8-3b42ec63414c"). InnerVolumeSpecName "kube-api-access-77v27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.662678 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49dbb4df-cwsvx" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.663620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49dbb4df-cwsvx" event={"ID":"2b3c8f55-5256-479d-a7e8-3b42ec63414c","Type":"ContainerDied","Data":"395d482a73b38497ac9e9f01132d392348c4a222cbbe498337f73518f9ea34c6"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.663716 4680 scope.go:117] "RemoveContainer" containerID="386a6336d98e65a63abb0e4cac266c39c1cd73f7f10f254b1f4bea97352b1bd2" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.669996 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerStarted","Data":"285a6488c2693a49fec060be54c729d05093b9449072d89bef031b089843ac5e"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.742960 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b3c8f55-5256-479d-a7e8-3b42ec63414c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.743036 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77v27\" (UniqueName: \"kubernetes.io/projected/2b3c8f55-5256-479d-a7e8-3b42ec63414c-kube-api-access-77v27\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.743050 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3c8f55-5256-479d-a7e8-3b42ec63414c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.769967 4680 generic.go:334] "Generic (PLEG): container finished" podID="115b3524-df91-4565-9f2f-c345931095f4" containerID="e7459ff85f6b35a0923958b9cf866d435ab1d881e42ce3620bb4f81c04a7c287" exitCode=2 Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.770047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerDied","Data":"e7459ff85f6b35a0923958b9cf866d435ab1d881e42ce3620bb4f81c04a7c287"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.772808 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data" (OuterVolumeSpecName: "config-data") 
pod "2b3c8f55-5256-479d-a7e8-3b42ec63414c" (UID: "2b3c8f55-5256-479d-a7e8-3b42ec63414c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.779695 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts" (OuterVolumeSpecName: "scripts") pod "2b3c8f55-5256-479d-a7e8-3b42ec63414c" (UID: "2b3c8f55-5256-479d-a7e8-3b42ec63414c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.793421 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb88c9957-lvzdd" event={"ID":"8d52a893-b89a-4ee1-b056-78a94a87ac96","Type":"ContainerDied","Data":"0a904f8b7c01fb2ba822a0207f8a907badc87d32cfd7fd006a4a21a8807c1121"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.793533 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cb88c9957-lvzdd" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.823176 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" event={"ID":"3ed333f1-0250-4607-aaac-55700afea2b8","Type":"ContainerStarted","Data":"6e77a66b9d4f91c2157f99549b4d07405e566d91f4507645bc0caa8bd4d79a9d"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.826631 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb7b957c-p2tm5" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.827345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" event={"ID":"897bb792-a70a-409f-9d9e-d15b0506cb42","Type":"ContainerStarted","Data":"eebaf1727f1fcf2f34d2a4c150190b2ad313d04918286086cad437c14057e127"} Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.850967 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.850998 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3c8f55-5256-479d-a7e8-3b42ec63414c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.854354 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.861629 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cb88c9957-lvzdd"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.924140 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ccb7b957c-p2tm5"] Jan 26 16:26:13 crc kubenswrapper[4680]: I0126 16:26:13.967407 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ccb7b957c-p2tm5"] Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.070208 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.112763 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.136010 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-5d49dbb4df-cwsvx"] Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.232314 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.258287 4680 scope.go:117] "RemoveContainer" containerID="21015a8047ea8bf6294f60025d9d3d1c4cc9be27349b0bfe16b02899df7c8b5b" Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.371119 4680 scope.go:117] "RemoveContainer" containerID="956a53f4c3ee7ee43bab040c19e8cf84f05eb0173f399c8a471abb7243c215d8" Jan 26 16:26:14 crc kubenswrapper[4680]: E0126 16:26:14.707973 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39034751_1073_4ef0_b70e_7553c2d9224c.slice/crio-conmon-c5682cc71bb3ce08b11f8c074717c4608c6f3dce8a51e71b04077d53406a00da.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.776718 4680 scope.go:117] "RemoveContainer" containerID="1f6251c8b0b9281f60883606b81c2a909e7dfffa64bdf80ddaa7c769afcb559a" Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.778044 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5b859b45b5-vjk86" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.859921 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerStarted","Data":"1c50fa0390484dfcd01d740511ff768efece57361e28f7f0a50debfedd41f29e"} Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.895519 4680 generic.go:334] "Generic (PLEG): container finished" podID="115b3524-df91-4565-9f2f-c345931095f4" containerID="eb4025e40af69801a3d785e222c9c0b7304ec62ad3b85f2d0c6fb67467eed00c" exitCode=0 Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.895637 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerDied","Data":"eb4025e40af69801a3d785e222c9c0b7304ec62ad3b85f2d0c6fb67467eed00c"} Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.936999 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerStarted","Data":"4a27b2443e2af3bb3491267b2f835e0edf65adecf2d28a887650abc5ec680bf9"} Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.978555 4680 generic.go:334] "Generic (PLEG): container finished" podID="39034751-1073-4ef0-b70e-7553c2d9224c" containerID="c5682cc71bb3ce08b11f8c074717c4608c6f3dce8a51e71b04077d53406a00da" exitCode=0 Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.978656 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" event={"ID":"39034751-1073-4ef0-b70e-7553c2d9224c","Type":"ContainerDied","Data":"c5682cc71bb3ce08b11f8c074717c4608c6f3dce8a51e71b04077d53406a00da"} Jan 26 16:26:14 crc kubenswrapper[4680]: I0126 16:26:14.978681 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" 
event={"ID":"39034751-1073-4ef0-b70e-7553c2d9224c","Type":"ContainerStarted","Data":"8d398699e14facd03dff30bb5672f13b896bf7ff4b92dff70412970f0d1160e6"} Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.026014 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerStarted","Data":"bea85aa4348e339511ebcf1c5398616985d8df2b7810b5332ddb512cdc828e2d"} Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.061942 4680 generic.go:334] "Generic (PLEG): container finished" podID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerID="5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508" exitCode=0 Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.062024 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerDied","Data":"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508"} Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.091208 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.120702 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.120795 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.122376 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93"} pod="openstack/horizon-c44b75754-m2rxl" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.122420 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" containerID="cri-o://e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93" gracePeriod=30 Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.200859 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" path="/var/lib/kubelet/pods/2b3c8f55-5256-479d-a7e8-3b42ec63414c/volumes" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.201778 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" path="/var/lib/kubelet/pods/8d52a893-b89a-4ee1-b056-78a94a87ac96/volumes" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.202425 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3ed449-bac5-40e6-a123-2df4f834faba" path="/var/lib/kubelet/pods/ef3ed449-bac5-40e6-a123-2df4f834faba/volumes" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.362334 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.362767 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.363681 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c"} pod="openstack/horizon-8657f7848d-ls2sv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.363721 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" containerID="cri-o://f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c" gracePeriod=30 Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.796229 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.921855 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.921909 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.921951 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.921972 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g72v8\" (UniqueName: \"kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.922001 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.922027 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.922113 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle\") pod \"3d56492b-ec98-47ad-ab19-e6fd24218b91\" (UID: \"3d56492b-ec98-47ad-ab19-e6fd24218b91\") " Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.946142 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8" (OuterVolumeSpecName: "kube-api-access-g72v8") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "kube-api-access-g72v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:15 crc kubenswrapper[4680]: I0126 16:26:15.980992 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.030508 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.047232 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g72v8\" (UniqueName: \"kubernetes.io/projected/3d56492b-ec98-47ad-ab19-e6fd24218b91-kube-api-access-g72v8\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.151536 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.169775 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerStarted","Data":"cdad2fe950fec467461b760cbcdf99b602d06520f2d77fd2b4d47c6c9b2b5a44"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.173685 4680 generic.go:334] "Generic (PLEG): container finished" podID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerID="faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd" exitCode=0 Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.173829 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerDied","Data":"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.173904 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b859b45b5-vjk86" event={"ID":"3d56492b-ec98-47ad-ab19-e6fd24218b91","Type":"ContainerDied","Data":"51862ec54444018793f271e8eded522b3515b8e2ba7d5473641011045e1c0e5a"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.173996 4680 scope.go:117] "RemoveContainer" containerID="5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.174164 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b859b45b5-vjk86" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.187889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerStarted","Data":"43004f91a07fa205cf3fdc057dbe3ed0d4f63809dbee06e4858e151777cb82a4"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.208951 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerStarted","Data":"31553f74f3fe8713c1f51db9a2294200098e1084990f8a2c1e049fe4fd00fe89"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.227088 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" event={"ID":"39034751-1073-4ef0-b70e-7553c2d9224c","Type":"ContainerStarted","Data":"9cbf522c3f43bb45ee4706f3d28c25f0227a053cf9defc02abfdf942c88cf230"} Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.227272 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.250301 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.267464 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" podStartSLOduration=5.267447528 podStartE2EDuration="5.267447528s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:16.25582276 +0000 UTC m=+1251.417095029" watchObservedRunningTime="2026-01-26 16:26:16.267447528 +0000 UTC m=+1251.428719797" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.321037 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config" (OuterVolumeSpecName: "config") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.352277 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.474866 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.556141 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.646271 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.646321 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3d56492b-ec98-47ad-ab19-e6fd24218b91" (UID: "3d56492b-ec98-47ad-ab19-e6fd24218b91"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.664613 4680 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.664650 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d56492b-ec98-47ad-ab19-e6fd24218b91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.810694 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.827869 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b859b45b5-vjk86"] Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.981913 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.981980 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.982031 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.982822 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:26:16 crc kubenswrapper[4680]: I0126 16:26:16.982898 4680 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211" gracePeriod=600 Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.210029 4680 scope.go:117] "RemoveContainer" containerID="faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.217415 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" path="/var/lib/kubelet/pods/3d56492b-ec98-47ad-ab19-e6fd24218b91/volumes" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.290122 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerStarted","Data":"c5d81a1a1705e6cec0c7a28fb8e33921c7c487143334cb2e43a477cb2d5f6811"} Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.308877 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerStarted","Data":"c9a630ddf8e70d5b7311fcbc603b921cc92ca1efeddb7e2099dfd6d06e4d31ed"} Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.310117 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.317522 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerStarted","Data":"86df9d3177fa84ec5d10a5b21b0c73670e969101a19b7ddfbb61e9fc32cde5c5"} Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.323866 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211" exitCode=0 Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.323945 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211"} Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.327886 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerStarted","Data":"8759e61a1de93e28c14e7d4862fb2735b77bde600cf2a68bdaf16e723a6a890a"} Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.328091 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.328316 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.343506 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.98496929 podStartE2EDuration="6.343481761s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="2026-01-26 16:26:12.776735107 +0000 UTC m=+1247.938007376" lastFinishedPulling="2026-01-26 16:26:13.135247578 +0000 UTC m=+1248.296519847" observedRunningTime="2026-01-26 
16:26:17.336179255 +0000 UTC m=+1252.497451524" watchObservedRunningTime="2026-01-26 16:26:17.343481761 +0000 UTC m=+1252.504754030" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.368784 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9b85fd5c9-zlmxc" podStartSLOduration=5.368765544 podStartE2EDuration="5.368765544s" podCreationTimestamp="2026-01-26 16:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:17.367673824 +0000 UTC m=+1252.528946093" watchObservedRunningTime="2026-01-26 16:26:17.368765544 +0000 UTC m=+1252.530037803" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.411115 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65bcdb7d94-8lznk" podStartSLOduration=6.411097278 podStartE2EDuration="6.411097278s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:17.40586143 +0000 UTC m=+1252.567133699" watchObservedRunningTime="2026-01-26 16:26:17.411097278 +0000 UTC m=+1252.572369547" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.785810 4680 scope.go:117] "RemoveContainer" containerID="5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508" Jan 26 16:26:17 crc kubenswrapper[4680]: E0126 16:26:17.786353 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508\": container with ID starting with 5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508 not found: ID does not exist" containerID="5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.786391 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508"} err="failed to get container status \"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508\": rpc error: code = NotFound desc = could not find container \"5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508\": container with ID starting with 5704bc763cd461bda3e078cbb9ad8e1acbca5242c19bf5285da2bd60289e0508 not found: ID does not exist" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.786416 4680 scope.go:117] "RemoveContainer" containerID="faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd" Jan 26 16:26:17 crc kubenswrapper[4680]: E0126 16:26:17.786646 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd\": container with ID starting with faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd not found: ID does not exist" containerID="faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.786663 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd"} err="failed to get container status \"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd\": rpc error: code = NotFound desc = could not find container 
\"faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd\": container with ID starting with faeb2640d985f0a1b8dd75a8cab8a2b8c7e58d0fa36ae8f44369a3bce0c5e4dd not found: ID does not exist" Jan 26 16:26:17 crc kubenswrapper[4680]: I0126 16:26:17.786674 4680 scope.go:117] "RemoveContainer" containerID="079abaf394e020c632241b295deb36fe6541d49138372b5520640414dceac2e9" Jan 26 16:26:18 crc kubenswrapper[4680]: I0126 16:26:18.372380 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a"} Jan 26 16:26:18 crc kubenswrapper[4680]: I0126 16:26:18.383091 4680 generic.go:334] "Generic (PLEG): container finished" podID="115b3524-df91-4565-9f2f-c345931095f4" containerID="006f59522ab7a6194fb16f1f4cae39e35cca6dfd2e6889135a34269be78cca8c" exitCode=0 Jan 26 16:26:18 crc kubenswrapper[4680]: I0126 16:26:18.383164 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerDied","Data":"006f59522ab7a6194fb16f1f4cae39e35cca6dfd2e6889135a34269be78cca8c"} Jan 26 16:26:18 crc kubenswrapper[4680]: I0126 16:26:18.386534 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" event={"ID":"3ed333f1-0250-4607-aaac-55700afea2b8","Type":"ContainerStarted","Data":"24bac58604e6aa87e4897ded9d11af6937b05ca6ed4a56be77803dcebcf3def2"} Jan 26 16:26:18 crc kubenswrapper[4680]: I0126 16:26:18.388358 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" event={"ID":"897bb792-a70a-409f-9d9e-d15b0506cb42","Type":"ContainerStarted","Data":"c729912a51c62ffbbfe2c3b378153b526d88b0fed61e1cc772a9de7b43f26456"} Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.021510 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bb54694f4-nlzbq"] Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022120 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-api" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022134 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-api" Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022149 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022155 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022165 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-httpd" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022171 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-httpd" Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022189 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022195 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022203 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022209 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: E0126 16:26:19.022218 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022223 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022397 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-httpd" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022477 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022490 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d52a893-b89a-4ee1-b056-78a94a87ac96" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022504 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022511 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d56492b-ec98-47ad-ab19-e6fd24218b91" containerName="neutron-api" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.022520 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3c8f55-5256-479d-a7e8-3b42ec63414c" containerName="horizon-log" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.023517 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.027716 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.046278 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.052254 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bb54694f4-nlzbq"] Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123163 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt4bp\" (UniqueName: \"kubernetes.io/projected/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-kube-api-access-gt4bp\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123241 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-public-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123271 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123296 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data-custom\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123327 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-logs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123346 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-combined-ca-bundle\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.123362 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-internal-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.224857 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.224915 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data-custom\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.224957 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-logs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.224982 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-combined-ca-bundle\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.225003 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-internal-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.225143 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4bp\" (UniqueName: \"kubernetes.io/projected/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-kube-api-access-gt4bp\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.225183 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-public-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.226718 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-logs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.245016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data-custom\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.244945 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-internal-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.245386 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-public-tls-certs\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.252685 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-combined-ca-bundle\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.256435 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-config-data\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.264648 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4bp\" (UniqueName: \"kubernetes.io/projected/f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d-kube-api-access-gt4bp\") pod \"barbican-api-6bb54694f4-nlzbq\" (UID: \"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d\") " pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.349482 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.464418 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerStarted","Data":"80ee147a3e41f4ff1f3d6f852542f319218168b85000a2a337b336595053684e"} Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.464611 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api-log" containerID="cri-o://86df9d3177fa84ec5d10a5b21b0c73670e969101a19b7ddfbb61e9fc32cde5c5" gracePeriod=30 Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.464708 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.465090 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" containerID="cri-o://80ee147a3e41f4ff1f3d6f852542f319218168b85000a2a337b336595053684e" gracePeriod=30 Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.490631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" event={"ID":"3ed333f1-0250-4607-aaac-55700afea2b8","Type":"ContainerStarted","Data":"694e0c38b1c398b736808fb102caf4333ed27c3c875951d9076b1748ca364bf7"} Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.506249 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" event={"ID":"897bb792-a70a-409f-9d9e-d15b0506cb42","Type":"ContainerStarted","Data":"60c4812453e88b37f43d5a3b53f87318a12b34ee1f024896e7508d2544f4a165"} Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.524338 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.524315412 podStartE2EDuration="8.524315412s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:19.508449034 +0000 UTC m=+1254.669721303" watchObservedRunningTime="2026-01-26 16:26:19.524315412 +0000 UTC m=+1254.685587681" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.556123 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77566b99db-dfsnv" podStartSLOduration=3.8965812079999997 podStartE2EDuration="8.556104158s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="2026-01-26 16:26:13.213085343 +0000 UTC m=+1248.374357602" lastFinishedPulling="2026-01-26 16:26:17.872608283 +0000 UTC m=+1253.033880552" observedRunningTime="2026-01-26 16:26:19.556097278 +0000 UTC m=+1254.717369547" watchObservedRunningTime="2026-01-26 16:26:19.556104158 +0000 UTC m=+1254.717376427" Jan 26 16:26:19 crc kubenswrapper[4680]: I0126 16:26:19.596092 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c9b6cc8f5-jfsbx" podStartSLOduration=3.833041856 podStartE2EDuration="8.596051535s" podCreationTimestamp="2026-01-26 16:26:11 +0000 UTC" firstStartedPulling="2026-01-26 16:26:13.091260847 +0000 UTC m=+1248.252533116" lastFinishedPulling="2026-01-26 16:26:17.854270526 +0000 UTC m=+1253.015542795" 
observedRunningTime="2026-01-26 16:26:19.582269046 +0000 UTC m=+1254.743541315" watchObservedRunningTime="2026-01-26 16:26:19.596051535 +0000 UTC m=+1254.757323804" Jan 26 16:26:20 crc kubenswrapper[4680]: I0126 16:26:20.008703 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bb54694f4-nlzbq"] Jan 26 16:26:20 crc kubenswrapper[4680]: I0126 16:26:20.517171 4680 generic.go:334] "Generic (PLEG): container finished" podID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerID="86df9d3177fa84ec5d10a5b21b0c73670e969101a19b7ddfbb61e9fc32cde5c5" exitCode=143 Jan 26 16:26:20 crc kubenswrapper[4680]: I0126 16:26:20.517445 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerDied","Data":"86df9d3177fa84ec5d10a5b21b0c73670e969101a19b7ddfbb61e9fc32cde5c5"} Jan 26 16:26:20 crc kubenswrapper[4680]: I0126 16:26:20.519670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb54694f4-nlzbq" event={"ID":"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d","Type":"ContainerStarted","Data":"030edf059debc9cf4f4f5208d3c333d039b5e4ad184d6d47a2f91ad5de2a5f52"} Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.443877 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.555738 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb54694f4-nlzbq" event={"ID":"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d","Type":"ContainerStarted","Data":"5a4af2c54b0287c907c7aa38857e252f8bb7952757f7b64e0fdcbdd5e4525a06"} Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.555791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb54694f4-nlzbq" event={"ID":"f7a42902-ad01-4a87-bf6b-7bd2e33f6d3d","Type":"ContainerStarted","Data":"23acfbe379882024aef0d409215e90f025ade2e7cd9d45357dda55bd9cda0a29"} Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.555898 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.580831 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bb54694f4-nlzbq" podStartSLOduration=3.5808137650000003 podStartE2EDuration="3.580813765s" podCreationTimestamp="2026-01-26 16:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:21.57175697 +0000 UTC m=+1256.733029239" watchObservedRunningTime="2026-01-26 16:26:21.580813765 +0000 UTC m=+1256.742086024" Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.841378 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:26:21 crc kubenswrapper[4680]: I0126 16:26:21.955052 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.040660 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bd5fc7bfb-cc2hd" Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.165872 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.261858 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.262101 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="dnsmasq-dns" containerID="cri-o://72e40a33377b634e69157219982d6f7a95684cd7b3f6b05fa8c0fd67d0fd72bf" gracePeriod=10 Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.568294 4680 generic.go:334] "Generic (PLEG): container finished" podID="e7f2efbe-2395-47cf-81d7-990164716cda" containerID="72e40a33377b634e69157219982d6f7a95684cd7b3f6b05fa8c0fd67d0fd72bf" exitCode=0 Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.569106 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" event={"ID":"e7f2efbe-2395-47cf-81d7-990164716cda","Type":"ContainerDied","Data":"72e40a33377b634e69157219982d6f7a95684cd7b3f6b05fa8c0fd67d0fd72bf"} Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.569547 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:22 crc kubenswrapper[4680]: I0126 16:26:22.925822 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.013750 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc\") pod \"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.013834 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb\") pod \"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.013949 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0\") pod \"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.013982 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntltf\" (UniqueName: \"kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf\") pod \"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.014034 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config\") pod \"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.014112 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb\") pod 
\"e7f2efbe-2395-47cf-81d7-990164716cda\" (UID: \"e7f2efbe-2395-47cf-81d7-990164716cda\") " Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.026026 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf" (OuterVolumeSpecName: "kube-api-access-ntltf") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "kube-api-access-ntltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.116261 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntltf\" (UniqueName: \"kubernetes.io/projected/e7f2efbe-2395-47cf-81d7-990164716cda-kube-api-access-ntltf\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.129722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.133534 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.166353 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.167027 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.182441 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config" (OuterVolumeSpecName: "config") pod "e7f2efbe-2395-47cf-81d7-990164716cda" (UID: "e7f2efbe-2395-47cf-81d7-990164716cda"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.218426 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.218456 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.218468 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.218481 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.218492 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f2efbe-2395-47cf-81d7-990164716cda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.582106 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.582099 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbcb956b7-s7klz" event={"ID":"e7f2efbe-2395-47cf-81d7-990164716cda","Type":"ContainerDied","Data":"82c7fedd77ee5c6ed687e179fd6c2f7be4d65c62fa655028a1e37fb85d5b0695"} Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.582269 4680 scope.go:117] "RemoveContainer" containerID="72e40a33377b634e69157219982d6f7a95684cd7b3f6b05fa8c0fd67d0fd72bf" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.607416 4680 scope.go:117] "RemoveContainer" containerID="5eebfee23e52d1f5649aacf8a93e7fbc7254105b6d312d7a14c038d3600ef3a2" Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.612153 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:26:23 crc kubenswrapper[4680]: I0126 16:26:23.631857 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cbcb956b7-s7klz"] Jan 26 16:26:25 crc kubenswrapper[4680]: I0126 16:26:25.128813 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:25 crc kubenswrapper[4680]: I0126 16:26:25.216355 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" path="/var/lib/kubelet/pods/e7f2efbe-2395-47cf-81d7-990164716cda/volumes" Jan 26 16:26:25 crc kubenswrapper[4680]: I0126 16:26:25.471812 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 16:26:25 crc kubenswrapper[4680]: I0126 16:26:25.638670 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:26 crc kubenswrapper[4680]: I0126 16:26:26.459341 4680 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 16:26:26 crc kubenswrapper[4680]: I0126 16:26:26.520001 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:26 crc kubenswrapper[4680]: I0126 16:26:26.607711 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="cinder-scheduler" containerID="cri-o://43004f91a07fa205cf3fdc057dbe3ed0d4f63809dbee06e4858e151777cb82a4" gracePeriod=30 Jan 26 16:26:26 crc kubenswrapper[4680]: I0126 16:26:26.608142 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="probe" containerID="cri-o://c5d81a1a1705e6cec0c7a28fb8e33921c7c487143334cb2e43a477cb2d5f6811" gracePeriod=30 Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.623931 4680 generic.go:334] "Generic (PLEG): container finished" podID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerID="c5d81a1a1705e6cec0c7a28fb8e33921c7c487143334cb2e43a477cb2d5f6811" exitCode=0 Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.624370 4680 generic.go:334] "Generic (PLEG): container finished" podID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerID="43004f91a07fa205cf3fdc057dbe3ed0d4f63809dbee06e4858e151777cb82a4" exitCode=0 Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.624391 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerDied","Data":"c5d81a1a1705e6cec0c7a28fb8e33921c7c487143334cb2e43a477cb2d5f6811"} Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.624417 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerDied","Data":"43004f91a07fa205cf3fdc057dbe3ed0d4f63809dbee06e4858e151777cb82a4"} Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.841654 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.945662 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.946012 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hbx5\" (UniqueName: \"kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.946095 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.946217 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.946241 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.946266 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom\") pod \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\" (UID: \"b3c7903b-3d60-439c-ad1a-b9f0c78101d1\") " Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.952924 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5" (OuterVolumeSpecName: "kube-api-access-5hbx5") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "kube-api-access-5hbx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.952978 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.961254 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts" (OuterVolumeSpecName: "scripts") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:28 crc kubenswrapper[4680]: I0126 16:26:28.963190 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.032212 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.048715 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.048749 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hbx5\" (UniqueName: \"kubernetes.io/projected/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-kube-api-access-5hbx5\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.048761 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.048769 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.048777 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.108820 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data" (OuterVolumeSpecName: "config-data") pod "b3c7903b-3d60-439c-ad1a-b9f0c78101d1" (UID: "b3c7903b-3d60-439c-ad1a-b9f0c78101d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.150287 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7903b-3d60-439c-ad1a-b9f0c78101d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.635756 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3c7903b-3d60-439c-ad1a-b9f0c78101d1","Type":"ContainerDied","Data":"285a6488c2693a49fec060be54c729d05093b9449072d89bef031b089843ac5e"} Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.635804 4680 scope.go:117] "RemoveContainer" containerID="c5d81a1a1705e6cec0c7a28fb8e33921c7c487143334cb2e43a477cb2d5f6811" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.635963 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.679861 4680 scope.go:117] "RemoveContainer" containerID="43004f91a07fa205cf3fdc057dbe3ed0d4f63809dbee06e4858e151777cb82a4" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.681182 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.703574 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742216 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:29 crc kubenswrapper[4680]: E0126 16:26:29.742646 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="init" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742659 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="init" Jan 26 16:26:29 crc kubenswrapper[4680]: E0126 16:26:29.742672 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="probe" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742677 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="probe" Jan 26 16:26:29 crc kubenswrapper[4680]: E0126 16:26:29.742693 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="dnsmasq-dns" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742699 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="dnsmasq-dns" Jan 26 16:26:29 crc kubenswrapper[4680]: E0126 16:26:29.742710 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="cinder-scheduler" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742715 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="cinder-scheduler" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742883 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f2efbe-2395-47cf-81d7-990164716cda" containerName="dnsmasq-dns" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742898 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="probe" Jan 26 
16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.742917 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" containerName="cinder-scheduler" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.743982 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.748285 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.752131 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.877653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/335cf387-3a97-45df-b399-7e2d6de829b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.877940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.877976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.877999 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.878027 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.881669 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjdb\" (UniqueName: \"kubernetes.io/projected/335cf387-3a97-45df-b399-7e2d6de829b0-kube-api-access-ftjdb\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983412 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/335cf387-3a97-45df-b399-7e2d6de829b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983506 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983550 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983577 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.983655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjdb\" (UniqueName: \"kubernetes.io/projected/335cf387-3a97-45df-b399-7e2d6de829b0-kube-api-access-ftjdb\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.984014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/335cf387-3a97-45df-b399-7e2d6de829b0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.992659 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.995566 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-scripts\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:29 crc kubenswrapper[4680]: I0126 16:26:29.995988 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.010769 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjdb\" (UniqueName: \"kubernetes.io/projected/335cf387-3a97-45df-b399-7e2d6de829b0-kube-api-access-ftjdb\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.011425 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/335cf387-3a97-45df-b399-7e2d6de829b0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"335cf387-3a97-45df-b399-7e2d6de829b0\") " pod="openstack/cinder-scheduler-0" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.117908 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.311704 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84ff475c8b-dxr5b" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.681904 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65bcdb7d94-8lznk" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:26:30 crc kubenswrapper[4680]: I0126 16:26:30.900377 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:26:30 crc kubenswrapper[4680]: W0126 16:26:30.915725 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod335cf387_3a97_45df_b399_7e2d6de829b0.slice/crio-10b1daf71de76e1b3f12adfe9ca5bc24187983233599d4e6449f5a0a1d0674dd WatchSource:0}: Error finding container 10b1daf71de76e1b3f12adfe9ca5bc24187983233599d4e6449f5a0a1d0674dd: Status 404 returned error can't find the container with id 10b1daf71de76e1b3f12adfe9ca5bc24187983233599d4e6449f5a0a1d0674dd Jan 26 16:26:31 crc kubenswrapper[4680]: I0126 16:26:31.184664 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c7903b-3d60-439c-ad1a-b9f0c78101d1" path="/var/lib/kubelet/pods/b3c7903b-3d60-439c-ad1a-b9f0c78101d1/volumes" Jan 26 16:26:31 crc kubenswrapper[4680]: I0126 16:26:31.666126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"335cf387-3a97-45df-b399-7e2d6de829b0","Type":"ContainerStarted","Data":"10b1daf71de76e1b3f12adfe9ca5bc24187983233599d4e6449f5a0a1d0674dd"} Jan 26 16:26:32 crc kubenswrapper[4680]: I0126 16:26:32.133409 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:26:32 crc kubenswrapper[4680]: I0126 16:26:32.675438 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"335cf387-3a97-45df-b399-7e2d6de829b0","Type":"ContainerStarted","Data":"8f4eb89ee0c42b83dec6ad2a43095db70daca8fc45fea053f234b7958e4b84e1"} Jan 26 16:26:32 crc kubenswrapper[4680]: I0126 16:26:32.676113 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"335cf387-3a97-45df-b399-7e2d6de829b0","Type":"ContainerStarted","Data":"48ab56dbf1aa974834eaa8218126b6ac44c9094fdfd06ee9b48f958aab6dd140"} Jan 26 16:26:32 crc kubenswrapper[4680]: I0126 16:26:32.696120 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.69610385 podStartE2EDuration="3.69610385s" podCreationTimestamp="2026-01-26 
16:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:32.692841758 +0000 UTC m=+1267.854114027" watchObservedRunningTime="2026-01-26 16:26:32.69610385 +0000 UTC m=+1267.857376119" Jan 26 16:26:33 crc kubenswrapper[4680]: I0126 16:26:33.239526 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:33 crc kubenswrapper[4680]: I0126 16:26:33.513627 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bb54694f4-nlzbq" Jan 26 16:26:33 crc kubenswrapper[4680]: I0126 16:26:33.597868 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 16:26:33 crc kubenswrapper[4680]: I0126 16:26:33.598420 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65bcdb7d94-8lznk" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" containerID="cri-o://cdad2fe950fec467461b760cbcdf99b602d06520f2d77fd2b4d47c6c9b2b5a44" gracePeriod=30 Jan 26 16:26:33 crc kubenswrapper[4680]: I0126 16:26:33.598559 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65bcdb7d94-8lznk" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api" containerID="cri-o://8759e61a1de93e28c14e7d4862fb2735b77bde600cf2a68bdaf16e723a6a890a" gracePeriod=30 Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.698117 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.699294 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.704814 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bpg7v" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.705266 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.707668 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.712936 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.714434 4680 generic.go:334] "Generic (PLEG): container finished" podID="01776535-8106-4e24-806b-c150936fbb6f" containerID="cdad2fe950fec467461b760cbcdf99b602d06520f2d77fd2b4d47c6c9b2b5a44" exitCode=143 Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.714475 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerDied","Data":"cdad2fe950fec467461b760cbcdf99b602d06520f2d77fd2b4d47c6c9b2b5a44"} Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.783935 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.784123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.784147 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24mz\" (UniqueName: \"kubernetes.io/projected/87934669-654f-41be-ae1f-4805fe0619f1-kube-api-access-t24mz\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.784182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.886267 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.886318 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24mz\" (UniqueName: \"kubernetes.io/projected/87934669-654f-41be-ae1f-4805fe0619f1-kube-api-access-t24mz\") pod \"openstackclient\" (UID: 
\"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.886367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.886420 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.887414 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.894718 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.906690 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87934669-654f-41be-ae1f-4805fe0619f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:34 crc kubenswrapper[4680]: I0126 16:26:34.910549 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24mz\" (UniqueName: \"kubernetes.io/projected/87934669-654f-41be-ae1f-4805fe0619f1-kube-api-access-t24mz\") pod \"openstackclient\" (UID: \"87934669-654f-41be-ae1f-4805fe0619f1\") " pod="openstack/openstackclient" Jan 26 16:26:35 crc kubenswrapper[4680]: I0126 16:26:35.015371 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 16:26:35 crc kubenswrapper[4680]: I0126 16:26:35.120058 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 16:26:35 crc kubenswrapper[4680]: I0126 16:26:35.580554 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 16:26:35 crc kubenswrapper[4680]: W0126 16:26:35.584467 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87934669_654f_41be_ae1f_4805fe0619f1.slice/crio-77adf25dd9ce32bca1144955d90d06e322fe754ced97ee6c2566eba6226ffc55 WatchSource:0}: Error finding container 77adf25dd9ce32bca1144955d90d06e322fe754ced97ee6c2566eba6226ffc55: Status 404 returned error can't find the container with id 77adf25dd9ce32bca1144955d90d06e322fe754ced97ee6c2566eba6226ffc55 Jan 26 16:26:35 crc kubenswrapper[4680]: I0126 16:26:35.699984 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 16:26:35 crc kubenswrapper[4680]: I0126 16:26:35.723289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"87934669-654f-41be-ae1f-4805fe0619f1","Type":"ContainerStarted","Data":"77adf25dd9ce32bca1144955d90d06e322fe754ced97ee6c2566eba6226ffc55"} Jan 26 16:26:37 crc kubenswrapper[4680]: I0126 16:26:37.394547 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65bcdb7d94-8lznk" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:51910->10.217.0.165:9311: read: connection reset by peer" Jan 26 16:26:37 crc kubenswrapper[4680]: I0126 16:26:37.394627 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65bcdb7d94-8lznk" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:51894->10.217.0.165:9311: read: connection reset by peer" Jan 26 16:26:37 crc kubenswrapper[4680]: I0126 16:26:37.788732 4680 generic.go:334] "Generic (PLEG): container finished" podID="01776535-8106-4e24-806b-c150936fbb6f" containerID="8759e61a1de93e28c14e7d4862fb2735b77bde600cf2a68bdaf16e723a6a890a" exitCode=0 Jan 26 16:26:37 crc kubenswrapper[4680]: I0126 16:26:37.788782 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerDied","Data":"8759e61a1de93e28c14e7d4862fb2735b77bde600cf2a68bdaf16e723a6a890a"} Jan 26 16:26:37 crc kubenswrapper[4680]: I0126 16:26:37.931512 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.043548 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data\") pod \"01776535-8106-4e24-806b-c150936fbb6f\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.043661 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle\") pod \"01776535-8106-4e24-806b-c150936fbb6f\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.043726 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom\") pod \"01776535-8106-4e24-806b-c150936fbb6f\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.043758 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs\") pod \"01776535-8106-4e24-806b-c150936fbb6f\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.043872 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p58k4\" (UniqueName: \"kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4\") pod \"01776535-8106-4e24-806b-c150936fbb6f\" (UID: \"01776535-8106-4e24-806b-c150936fbb6f\") " Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.045167 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs" (OuterVolumeSpecName: "logs") pod "01776535-8106-4e24-806b-c150936fbb6f" (UID: "01776535-8106-4e24-806b-c150936fbb6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.053614 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01776535-8106-4e24-806b-c150936fbb6f" (UID: "01776535-8106-4e24-806b-c150936fbb6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.066276 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4" (OuterVolumeSpecName: "kube-api-access-p58k4") pod "01776535-8106-4e24-806b-c150936fbb6f" (UID: "01776535-8106-4e24-806b-c150936fbb6f"). InnerVolumeSpecName "kube-api-access-p58k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.085197 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01776535-8106-4e24-806b-c150936fbb6f" (UID: "01776535-8106-4e24-806b-c150936fbb6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.126797 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data" (OuterVolumeSpecName: "config-data") pod "01776535-8106-4e24-806b-c150936fbb6f" (UID: "01776535-8106-4e24-806b-c150936fbb6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.146634 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.146668 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.146681 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01776535-8106-4e24-806b-c150936fbb6f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.146694 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01776535-8106-4e24-806b-c150936fbb6f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.146706 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p58k4\" (UniqueName: \"kubernetes.io/projected/01776535-8106-4e24-806b-c150936fbb6f-kube-api-access-p58k4\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.715736 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"] Jan 26 16:26:38 crc kubenswrapper[4680]: E0126 16:26:38.716517 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.716533 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" Jan 26 16:26:38 crc kubenswrapper[4680]: E0126 16:26:38.716552 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.716558 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.733350 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api-log" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.733420 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="01776535-8106-4e24-806b-c150936fbb6f" containerName="barbican-api" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.735312 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.740528 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.749061 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-jtbml" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.749396 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.763905 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"] Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884533 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65bcdb7d94-8lznk" event={"ID":"01776535-8106-4e24-806b-c150936fbb6f","Type":"ContainerDied","Data":"bea85aa4348e339511ebcf1c5398616985d8df2b7810b5332ddb512cdc828e2d"} Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884566 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884618 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6whrt\" (UniqueName: \"kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884658 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65bcdb7d94-8lznk" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884865 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884897 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.884585 4680 scope.go:117] "RemoveContainer" containerID="8759e61a1de93e28c14e7d4862fb2735b77bde600cf2a68bdaf16e723a6a890a" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.934240 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.935832 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.951889 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.977497 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.979463 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.993720 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.993769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.993847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:38 crc kubenswrapper[4680]: I0126 16:26:38.993868 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6whrt\" (UniqueName: \"kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.015636 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.020768 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.028190 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.037460 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6whrt\" (UniqueName: 
\"kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt\") pod \"heat-engine-5478f87d6c-x9mh9\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") " pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.074224 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.081280 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.081872 4680 scope.go:117] "RemoveContainer" containerID="cdad2fe950fec467461b760cbcdf99b602d06520f2d77fd2b4d47c6c9b2b5a44" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.093650 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.099809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.099886 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.099913 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vg7r\" (UniqueName: \"kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.099973 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.100001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.100024 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.100055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.103900 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plbrc\" (UniqueName: \"kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.103978 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.104215 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.112278 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.125214 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-65bcdb7d94-8lznk"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.142136 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.145672 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.156311 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.156775 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.202533 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01776535-8106-4e24-806b-c150936fbb6f" path="/var/lib/kubelet/pods/01776535-8106-4e24-806b-c150936fbb6f/volumes" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206333 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206509 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206595 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206683 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206771 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vg7r\" (UniqueName: \"kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206875 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.206950 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207015 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207108 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207280 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plbrc\" (UniqueName: \"kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207353 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207427 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrqk\" (UniqueName: \"kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.207499 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.208378 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.209379 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.210672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.214006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.214966 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.216223 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.222034 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.239236 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.248933 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plbrc\" (UniqueName: \"kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc\") pod \"dnsmasq-dns-78b86675f-bmh7k\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.272043 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vg7r\" (UniqueName: \"kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r\") pod \"heat-cfnapi-57966556c4-5mgs4\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.276485 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.309044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.309260 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.309316 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrqk\" (UniqueName: \"kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.309361 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.315644 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.317865 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.320180 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.333297 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrqk\" (UniqueName: \"kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk\") pod \"heat-api-85875d88f7-m4tq6\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.413058 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.621691 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:39 crc kubenswrapper[4680]: I0126 16:26:39.948344 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"] Jan 26 16:26:39 crc kubenswrapper[4680]: W0126 16:26:39.978788 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ebf7926_1f6f_4f6e_a57d_e4cb3a1b1a79.slice/crio-840677c813be7e91d795b3579f4a112cceeecb58adc37af98822588bcf46d39f WatchSource:0}: Error finding container 840677c813be7e91d795b3579f4a112cceeecb58adc37af98822588bcf46d39f: Status 404 returned error can't find the container with id 840677c813be7e91d795b3579f4a112cceeecb58adc37af98822588bcf46d39f Jan 26 16:26:40 crc kubenswrapper[4680]: I0126 16:26:40.303162 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:26:40 crc kubenswrapper[4680]: I0126 16:26:40.345576 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:26:40 crc kubenswrapper[4680]: W0126 16:26:40.375946 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9599857d_051f_4a93_8c81_af5e73f5e087.slice/crio-1b91fdc8a3e92b2d3befa90d596138f74f8ebb6dcff64d8b4bd7bd194c8a71fd WatchSource:0}: Error finding container 1b91fdc8a3e92b2d3befa90d596138f74f8ebb6dcff64d8b4bd7bd194c8a71fd: Status 404 returned error can't find the container with id 1b91fdc8a3e92b2d3befa90d596138f74f8ebb6dcff64d8b4bd7bd194c8a71fd Jan 26 16:26:40 crc kubenswrapper[4680]: I0126 16:26:40.524926 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:26:40 crc kubenswrapper[4680]: I0126 16:26:40.640215 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 16:26:40 crc kubenswrapper[4680]: I0126 16:26:40.972010 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" event={"ID":"9599857d-051f-4a93-8c81-af5e73f5e087","Type":"ContainerStarted","Data":"1b91fdc8a3e92b2d3befa90d596138f74f8ebb6dcff64d8b4bd7bd194c8a71fd"} Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.000767 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5478f87d6c-x9mh9" event={"ID":"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79","Type":"ContainerStarted","Data":"5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d"} Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.000815 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5478f87d6c-x9mh9" event={"ID":"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79","Type":"ContainerStarted","Data":"840677c813be7e91d795b3579f4a112cceeecb58adc37af98822588bcf46d39f"} Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.022528 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57966556c4-5mgs4" event={"ID":"65bc9e91-e18f-4da0-a068-1a2f5199068f","Type":"ContainerStarted","Data":"bac481a4776d7513b511f013cf28b1321ee8b679b1f5210a355ca43674e873c7"} Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.040604 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85875d88f7-m4tq6" event={"ID":"8f0cd221-e6ba-4921-9c2b-49e6424cd321","Type":"ContainerStarted","Data":"a79bb6b7cc79809dd7099d3cf60e1357e18ce4403359860f7c7e78057f73cbb7"} Jan 26 
16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.053607 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5478f87d6c-x9mh9" podStartSLOduration=3.053590364 podStartE2EDuration="3.053590364s" podCreationTimestamp="2026-01-26 16:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:41.048931663 +0000 UTC m=+1276.210203932" watchObservedRunningTime="2026-01-26 16:26:41.053590364 +0000 UTC m=+1276.214862633" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.582031 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-744f4bf557-dr6ng"] Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.590199 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.597870 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744f4bf557-dr6ng"] Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.631721 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.631937 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.632161 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683016 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-internal-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683060 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-public-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683098 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-log-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-etc-swift\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683404 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-combined-ca-bundle\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: 
\"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683479 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-run-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683550 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-config-data\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.683594 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9c8\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-kube-api-access-jn9c8\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785209 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-internal-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785259 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-public-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-log-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785323 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-etc-swift\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785360 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-combined-ca-bundle\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785393 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-run-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: 
\"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785417 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-config-data\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.785440 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9c8\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-kube-api-access-jn9c8\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.786483 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-run-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.786564 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9139f2-446c-49ea-9d61-f1d48df4998b-log-httpd\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.790609 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-internal-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.791538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-etc-swift\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.793278 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-config-data\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.794094 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-combined-ca-bundle\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.796736 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d9139f2-446c-49ea-9d61-f1d48df4998b-public-tls-certs\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: 
I0126 16:26:41.814307 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9c8\" (UniqueName: \"kubernetes.io/projected/9d9139f2-446c-49ea-9d61-f1d48df4998b-kube-api-access-jn9c8\") pod \"swift-proxy-744f4bf557-dr6ng\" (UID: \"9d9139f2-446c-49ea-9d61-f1d48df4998b\") " pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:41 crc kubenswrapper[4680]: I0126 16:26:41.958056 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:42 crc kubenswrapper[4680]: I0126 16:26:42.057759 4680 generic.go:334] "Generic (PLEG): container finished" podID="9599857d-051f-4a93-8c81-af5e73f5e087" containerID="5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f" exitCode=0 Jan 26 16:26:42 crc kubenswrapper[4680]: I0126 16:26:42.059146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" event={"ID":"9599857d-051f-4a93-8c81-af5e73f5e087","Type":"ContainerDied","Data":"5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f"} Jan 26 16:26:42 crc kubenswrapper[4680]: I0126 16:26:42.059211 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5478f87d6c-x9mh9" Jan 26 16:26:42 crc kubenswrapper[4680]: I0126 16:26:42.605628 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744f4bf557-dr6ng"] Jan 26 16:26:42 crc kubenswrapper[4680]: I0126 16:26:42.957330 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.025247 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.025878 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54c5db589d-4gv27" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-httpd" containerID="cri-o://d12da4a05e1c88c8a27b4602e3fe446a4b1dd425a8c6634e561741b44153fba9" gracePeriod=30 Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.025522 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54c5db589d-4gv27" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-api" containerID="cri-o://9fd845542814645ee71ea840b72cdba1eecdcb406eb3a5368b3daf6dcd17d343" gracePeriod=30 Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.109831 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" event={"ID":"9599857d-051f-4a93-8c81-af5e73f5e087","Type":"ContainerStarted","Data":"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545"} Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.110844 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.136832 4680 generic.go:334] "Generic (PLEG): container finished" podID="115b3524-df91-4565-9f2f-c345931095f4" containerID="b7cd3827bb25c07fb8f3342d1d61d0b29e5326dad79507debcad2f04c6ea089f" exitCode=137 Jan 26 16:26:43 crc kubenswrapper[4680]: I0126 16:26:43.136953 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerDied","Data":"b7cd3827bb25c07fb8f3342d1d61d0b29e5326dad79507debcad2f04c6ea089f"} Jan 26 16:26:43 
crc kubenswrapper[4680]: I0126 16:26:43.154199 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" podStartSLOduration=5.154178711 podStartE2EDuration="5.154178711s" podCreationTimestamp="2026-01-26 16:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:43.133138298 +0000 UTC m=+1278.294410567" watchObservedRunningTime="2026-01-26 16:26:43.154178711 +0000 UTC m=+1278.315450980" Jan 26 16:26:44 crc kubenswrapper[4680]: I0126 16:26:44.164536 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerID="d12da4a05e1c88c8a27b4602e3fe446a4b1dd425a8c6634e561741b44153fba9" exitCode=0 Jan 26 16:26:44 crc kubenswrapper[4680]: I0126 16:26:44.165732 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerDied","Data":"d12da4a05e1c88c8a27b4602e3fe446a4b1dd425a8c6634e561741b44153fba9"} Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.189096 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744f4bf557-dr6ng" event={"ID":"9d9139f2-446c-49ea-9d61-f1d48df4998b","Type":"ContainerStarted","Data":"533717c6a328525497c033ab5de8d3b90bdd4cdca7eb65eeb338d5c950032e12"} Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.789669 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.870319 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.870741 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.870765 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.870820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.870869 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.871529 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.871609 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.871987 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.872063 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4fnm\" (UniqueName: \"kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm\") pod \"115b3524-df91-4565-9f2f-c345931095f4\" (UID: \"115b3524-df91-4565-9f2f-c345931095f4\") " Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.873040 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.873063 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/115b3524-df91-4565-9f2f-c345931095f4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.882985 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts" (OuterVolumeSpecName: "scripts") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.883108 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm" (OuterVolumeSpecName: "kube-api-access-f4fnm") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "kube-api-access-f4fnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.960509 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.975220 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4fnm\" (UniqueName: \"kubernetes.io/projected/115b3524-df91-4565-9f2f-c345931095f4-kube-api-access-f4fnm\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.975257 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:45 crc kubenswrapper[4680]: I0126 16:26:45.975269 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.049394 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.080220 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.131623 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data" (OuterVolumeSpecName: "config-data") pod "115b3524-df91-4565-9f2f-c345931095f4" (UID: "115b3524-df91-4565-9f2f-c345931095f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.181529 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115b3524-df91-4565-9f2f-c345931095f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.199635 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"115b3524-df91-4565-9f2f-c345931095f4","Type":"ContainerDied","Data":"e1b8d1f40c953f3b8592242dfe887b98e58ecfcf08ed7406a2fad57e9b7388d2"} Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.199693 4680 scope.go:117] "RemoveContainer" containerID="b7cd3827bb25c07fb8f3342d1d61d0b29e5326dad79507debcad2f04c6ea089f" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.199845 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.212259 4680 generic.go:334] "Generic (PLEG): container finished" podID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerID="e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93" exitCode=137 Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.212429 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerDied","Data":"e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93"} Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.219461 4680 generic.go:334] "Generic (PLEG): container finished" podID="34651440-00a2-4b50-a6cc-a0230d4def92" containerID="f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c" exitCode=137 Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.219524 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerDied","Data":"f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c"} Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.255255 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.276974 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.304002 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:46 crc kubenswrapper[4680]: E0126 16:26:46.304688 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-notification-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.304702 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-notification-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: E0126 16:26:46.304729 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-central-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.304735 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-central-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: E0126 16:26:46.304761 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="sg-core" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.304771 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="sg-core" Jan 26 16:26:46 crc kubenswrapper[4680]: E0126 16:26:46.304780 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="proxy-httpd" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.304786 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="proxy-httpd" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.305132 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-central-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.305141 4680 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="sg-core" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.305163 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="proxy-httpd" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.305172 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="115b3524-df91-4565-9f2f-c345931095f4" containerName="ceilometer-notification-agent" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.336244 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.339154 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.340228 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.342424 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.385097 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.385606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwh4\" (UniqueName: \"kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.385721 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.385870 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.385960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.386053 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.386299 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.488252 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.488534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.488697 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.488821 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.488914 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwh4\" (UniqueName: \"kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.489016 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.489394 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.490393 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.494371 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.494645 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.494637 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.509434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.510406 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwh4\" (UniqueName: \"kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.515923 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") " pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.691220 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.716174 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d8d88fb6d-d48d6"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.717477 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.749992 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d8d88fb6d-d48d6"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.778874 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.780060 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.799521 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-combined-ca-bundle\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.799563 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data-custom\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.799649 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6mbn\" (UniqueName: \"kubernetes.io/projected/8321aeb8-a56d-441f-8584-8392dfd855fa-kube-api-access-d6mbn\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.799676 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.820724 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.822456 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.860641 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.868965 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"] Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.901760 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902006 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jjm\" (UniqueName: \"kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902116 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6mbn\" (UniqueName: \"kubernetes.io/projected/8321aeb8-a56d-441f-8584-8392dfd855fa-kube-api-access-d6mbn\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902359 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902430 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902546 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: 
\"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902647 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-combined-ca-bundle\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data-custom\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902803 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjxd\" (UniqueName: \"kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.902887 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.909785 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-combined-ca-bundle\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.929765 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6mbn\" (UniqueName: \"kubernetes.io/projected/8321aeb8-a56d-441f-8584-8392dfd855fa-kube-api-access-d6mbn\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.931958 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:46 crc kubenswrapper[4680]: I0126 16:26:46.932763 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8321aeb8-a56d-441f-8584-8392dfd855fa-config-data-custom\") pod \"heat-engine-d8d88fb6d-d48d6\" (UID: \"8321aeb8-a56d-441f-8584-8392dfd855fa\") " pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005666 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjxd\" (UniqueName: \"kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: 
\"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005747 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005778 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jjm\" (UniqueName: \"kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005804 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005858 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.005916 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.012193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.030584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 
16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.031056 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.036341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.039451 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.044567 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.054349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.054885 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jjm\" (UniqueName: \"kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm\") pod \"heat-api-6975d77d6d-c2r4q\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") " pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.063528 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzjxd\" (UniqueName: \"kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd\") pod \"heat-cfnapi-567897784f-4dflj\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") " pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.100713 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.144164 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6975d77d6d-c2r4q" Jan 26 16:26:47 crc kubenswrapper[4680]: I0126 16:26:47.183847 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115b3524-df91-4565-9f2f-c345931095f4" path="/var/lib/kubelet/pods/115b3524-df91-4565-9f2f-c345931095f4/volumes" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.261512 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerID="9fd845542814645ee71ea840b72cdba1eecdcb406eb3a5368b3daf6dcd17d343" exitCode=0 Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.261624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerDied","Data":"9fd845542814645ee71ea840b72cdba1eecdcb406eb3a5368b3daf6dcd17d343"} Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.371963 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.401099 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.423167 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.435690 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5c564bb545-jh4l6"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.438008 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.451638 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.453189 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.546385 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c564bb545-jh4l6"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575064 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575348 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-public-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575458 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-combined-ca-bundle\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575486 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp2g\" (UniqueName: \"kubernetes.io/projected/cc13c199-5abb-4c89-a1b7-20b39fe83610-kube-api-access-6kp2g\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575540 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-internal-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.575615 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data-custom\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.595200 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-64ddd8bb8d-7kzsg"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.596654 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.601964 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.602232 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.615050 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64ddd8bb8d-7kzsg"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.644344 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.644642 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="dnsmasq-dns" containerID="cri-o://9cbf522c3f43bb45ee4706f3d28c25f0227a053cf9defc02abfdf942c88cf230" gracePeriod=10 Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-combined-ca-bundle\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678440 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp2g\" (UniqueName: \"kubernetes.io/projected/cc13c199-5abb-4c89-a1b7-20b39fe83610-kube-api-access-6kp2g\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678494 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-internal-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678524 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-internal-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjnn\" (UniqueName: \"kubernetes.io/projected/1095c056-3633-4d3f-a671-3eba54e8bcc9-kube-api-access-khjnn\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data-custom\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data-custom\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678738 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-public-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678810 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678833 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678929 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-public-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.678983 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-combined-ca-bundle\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.691690 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-public-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.693183 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.697979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-internal-tls-certs\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.700278 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-config-data-custom\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.715173 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc13c199-5abb-4c89-a1b7-20b39fe83610-combined-ca-bundle\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.728503 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp2g\" (UniqueName: \"kubernetes.io/projected/cc13c199-5abb-4c89-a1b7-20b39fe83610-kube-api-access-6kp2g\") pod \"heat-api-5c564bb545-jh4l6\" (UID: \"cc13c199-5abb-4c89-a1b7-20b39fe83610\") " pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781325 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-public-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781418 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781509 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-combined-ca-bundle\") 
pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781561 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-internal-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjnn\" (UniqueName: \"kubernetes.io/projected/1095c056-3633-4d3f-a671-3eba54e8bcc9-kube-api-access-khjnn\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.781636 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data-custom\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.789760 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.791856 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-public-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.792980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-combined-ca-bundle\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.807291 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-internal-tls-certs\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.822932 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjnn\" (UniqueName: \"kubernetes.io/projected/1095c056-3633-4d3f-a671-3eba54e8bcc9-kube-api-access-khjnn\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: \"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.823737 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1095c056-3633-4d3f-a671-3eba54e8bcc9-config-data-custom\") pod \"heat-cfnapi-64ddd8bb8d-7kzsg\" (UID: 
\"1095c056-3633-4d3f-a671-3eba54e8bcc9\") " pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.831029 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c564bb545-jh4l6" Jan 26 16:26:49 crc kubenswrapper[4680]: I0126 16:26:49.925484 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" Jan 26 16:26:50 crc kubenswrapper[4680]: I0126 16:26:50.286541 4680 generic.go:334] "Generic (PLEG): container finished" podID="39034751-1073-4ef0-b70e-7553c2d9224c" containerID="9cbf522c3f43bb45ee4706f3d28c25f0227a053cf9defc02abfdf942c88cf230" exitCode=0 Jan 26 16:26:50 crc kubenswrapper[4680]: I0126 16:26:50.286640 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" event={"ID":"39034751-1073-4ef0-b70e-7553c2d9224c","Type":"ContainerDied","Data":"9cbf522c3f43bb45ee4706f3d28c25f0227a053cf9defc02abfdf942c88cf230"} Jan 26 16:26:50 crc kubenswrapper[4680]: I0126 16:26:50.296568 4680 generic.go:334] "Generic (PLEG): container finished" podID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerID="80ee147a3e41f4ff1f3d6f852542f319218168b85000a2a337b336595053684e" exitCode=137 Jan 26 16:26:50 crc kubenswrapper[4680]: I0126 16:26:50.296620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerDied","Data":"80ee147a3e41f4ff1f3d6f852542f319218168b85000a2a337b336595053684e"} Jan 26 16:26:52 crc kubenswrapper[4680]: I0126 16:26:52.091984 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": dial tcp 10.217.0.163:8776: connect: connection refused" Jan 26 16:26:52 crc kubenswrapper[4680]: I0126 16:26:52.163556 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Jan 26 16:26:53 crc kubenswrapper[4680]: I0126 16:26:53.077649 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:54 crc kubenswrapper[4680]: I0126 16:26:54.742220 4680 scope.go:117] "RemoveContainer" containerID="e7459ff85f6b35a0923958b9cf866d435ab1d881e42ce3620bb4f81c04a7c287" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.265426 4680 scope.go:117] "RemoveContainer" containerID="006f59522ab7a6194fb16f1f4cae39e35cca6dfd2e6889135a34269be78cca8c" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.371865 4680 scope.go:117] "RemoveContainer" containerID="eb4025e40af69801a3d785e222c9c0b7304ec62ad3b85f2d0c6fb67467eed00c" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.412933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" event={"ID":"39034751-1073-4ef0-b70e-7553c2d9224c","Type":"ContainerDied","Data":"8d398699e14facd03dff30bb5672f13b896bf7ff4b92dff70412970f0d1160e6"} Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.412972 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d398699e14facd03dff30bb5672f13b896bf7ff4b92dff70412970f0d1160e6" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.468060 4680 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515359 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515616 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chzpc\" (UniqueName: \"kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515663 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515727 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515760 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.515788 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb\") pod \"39034751-1073-4ef0-b70e-7553c2d9224c\" (UID: \"39034751-1073-4ef0-b70e-7553c2d9224c\") " Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.559650 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc" (OuterVolumeSpecName: "kube-api-access-chzpc") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "kube-api-access-chzpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.618777 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chzpc\" (UniqueName: \"kubernetes.io/projected/39034751-1073-4ef0-b70e-7553c2d9224c-kube-api-access-chzpc\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.818940 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"] Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.836969 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d8d88fb6d-d48d6"] Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.929184 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.929366 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.940676 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.940702 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:55 crc kubenswrapper[4680]: I0126 16:26:55.946391 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.046377 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.063429 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.089141 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config" (OuterVolumeSpecName: "config") pod "39034751-1073-4ef0-b70e-7553c2d9224c" (UID: "39034751-1073-4ef0-b70e-7553c2d9224c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.116203 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64ddd8bb8d-7kzsg"] Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.151536 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.151571 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39034751-1073-4ef0-b70e-7553c2d9224c-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.276699 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.316599 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.326014 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394578 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle\") pod \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394617 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config\") pod \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394690 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs\") pod \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394715 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394823 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25867\" (UniqueName: \"kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867\") pod \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394881 4680 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394917 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn8xp\" (UniqueName: \"kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394935 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.394983 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config\") pod \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\" (UID: \"7ec969a4-689e-4e82-aedc-9bed8ebe99b2\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.395005 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.395046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.395089 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom\") pod \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\" (UID: \"bccd86b4-9a56-4f71-bec5-e4f2ea026725\") " Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.543557 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs" (OuterVolumeSpecName: "logs") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.543640 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.588341 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7ec969a4-689e-4e82-aedc-9bed8ebe99b2" (UID: "7ec969a4-689e-4e82-aedc-9bed8ebe99b2"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.593887 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccd86b4-9a56-4f71-bec5-e4f2ea026725-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.593997 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bccd86b4-9a56-4f71-bec5-e4f2ea026725-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.599305 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts" (OuterVolumeSpecName: "scripts") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.600576 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp" (OuterVolumeSpecName: "kube-api-access-tn8xp") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "kube-api-access-tn8xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.645938 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerStarted","Data":"c69624d2bcc285b657d662c97a87069e6ddd188655dd4cffa20769a64bbb9a15"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.657485 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744f4bf557-dr6ng" event={"ID":"9d9139f2-446c-49ea-9d61-f1d48df4998b","Type":"ContainerStarted","Data":"cc220873bc33938ed369f36d7c2fc930365a7123052864d2c7e0413cb85682af"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.666214 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c564bb545-jh4l6" event={"ID":"cc13c199-5abb-4c89-a1b7-20b39fe83610","Type":"ContainerStarted","Data":"078ea7eadc3cfc9080646a1cefeb48f5e1dfdfd34bf4954752c319f8bf37e0c6"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.677718 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerStarted","Data":"2e3d9a2c4fb777ce77e8ac71a8785cd5db8554e671ae5b738937d90cbd63f6b4"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.680357 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerStarted","Data":"b275fff03875358e1c7c80212dc18ee78085837bea342e7edd3282919beb21cd"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.685188 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867" (OuterVolumeSpecName: "kube-api-access-25867") pod "7ec969a4-689e-4e82-aedc-9bed8ebe99b2" (UID: "7ec969a4-689e-4e82-aedc-9bed8ebe99b2"). InnerVolumeSpecName "kube-api-access-25867". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.702177 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54c5db589d-4gv27" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.702897 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54c5db589d-4gv27" event={"ID":"7ec969a4-689e-4e82-aedc-9bed8ebe99b2","Type":"ContainerDied","Data":"431345e6cc451e6eb65b1eae8b4f59ab1e44261d16ba6346b7f7e2f605036693"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.702949 4680 scope.go:117] "RemoveContainer" containerID="d12da4a05e1c88c8a27b4602e3fe446a4b1dd425a8c6634e561741b44153fba9" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.703358 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25867\" (UniqueName: \"kubernetes.io/projected/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-kube-api-access-25867\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.703397 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn8xp\" (UniqueName: \"kubernetes.io/projected/bccd86b4-9a56-4f71-bec5-e4f2ea026725-kube-api-access-tn8xp\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.703406 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.703417 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.717883 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c564bb545-jh4l6"] Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.723688 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.724036 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerStarted","Data":"d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.736789 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d8d88fb6d-d48d6" event={"ID":"8321aeb8-a56d-441f-8584-8392dfd855fa","Type":"ContainerStarted","Data":"28edd1479b2394d12c69bace7e1c00e4c9ac0e6afc0727aeadaaf8a2e933a73f"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.742278 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bccd86b4-9a56-4f71-bec5-e4f2ea026725","Type":"ContainerDied","Data":"1c50fa0390484dfcd01d740511ff768efece57361e28f7f0a50debfedd41f29e"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.742537 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.749185 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"] Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.749784 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5c8f4c67-xps6r" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.750245 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" event={"ID":"1095c056-3633-4d3f-a671-3eba54e8bcc9","Type":"ContainerStarted","Data":"2f156ba51c717a6792d79ad56748e86aa867623ed08964f0d4b7c9b0cf13ec21"} Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.834960 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.922254 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:56 crc kubenswrapper[4680]: I0126 16:26:56.941454 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.282473 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data" (OuterVolumeSpecName: "config-data") pod "bccd86b4-9a56-4f71-bec5-e4f2ea026725" (UID: "bccd86b4-9a56-4f71-bec5-e4f2ea026725"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.304296 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config" (OuterVolumeSpecName: "config") pod "7ec969a4-689e-4e82-aedc-9bed8ebe99b2" (UID: "7ec969a4-689e-4e82-aedc-9bed8ebe99b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.353142 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ec969a4-689e-4e82-aedc-9bed8ebe99b2" (UID: "7ec969a4-689e-4e82-aedc-9bed8ebe99b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.354941 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccd86b4-9a56-4f71-bec5-e4f2ea026725-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.354965 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.354975 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.370438 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7ec969a4-689e-4e82-aedc-9bed8ebe99b2" (UID: "7ec969a4-689e-4e82-aedc-9bed8ebe99b2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.456478 4680 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec969a4-689e-4e82-aedc-9bed8ebe99b2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.489139 4680 scope.go:117] "RemoveContainer" containerID="9fd845542814645ee71ea840b72cdba1eecdcb406eb3a5368b3daf6dcd17d343" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.557892 4680 scope.go:117] "RemoveContainer" containerID="80ee147a3e41f4ff1f3d6f852542f319218168b85000a2a337b336595053684e" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.596161 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.608328 4680 scope.go:117] "RemoveContainer" containerID="86df9d3177fa84ec5d10a5b21b0c73670e969101a19b7ddfbb61e9fc32cde5c5" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.615144 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c5c8f4c67-xps6r"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.635210 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.646344 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713138 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713560 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713574 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713593 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-httpd" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713600 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-httpd" Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713613 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="dnsmasq-dns" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713619 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="dnsmasq-dns" Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713634 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api-log" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713640 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api-log" Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713657 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="init" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713663 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="init" Jan 26 16:26:57 crc kubenswrapper[4680]: E0126 16:26:57.713685 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-api" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713693 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-api" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713865 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-api" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713885 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713898 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" containerName="neutron-httpd" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713909 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" containerName="cinder-api-log" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.713921 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" containerName="dnsmasq-dns" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.714863 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.718870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.718891 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.719050 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.725854 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.740219 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.770346 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54c5db589d-4gv27"] Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.775517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerStarted","Data":"68c9521200f5f646f1eb2f0d1243136760e2019080b4b816c5ae7e2e20943210"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.776856 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-567897784f-4dflj" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786187 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786244 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qth\" (UniqueName: \"kubernetes.io/projected/ae2d7241-4b0c-45d9-be7a-0ecccf218107-kube-api-access-78qth\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786273 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786293 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2d7241-4b0c-45d9-be7a-0ecccf218107-logs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786328 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786361 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786434 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2d7241-4b0c-45d9-be7a-0ecccf218107-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786486 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.786518 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-scripts\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.789639 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85875d88f7-m4tq6" event={"ID":"8f0cd221-e6ba-4921-9c2b-49e6424cd321","Type":"ContainerStarted","Data":"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.789763 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-85875d88f7-m4tq6" podUID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" containerName="heat-api" containerID="cri-o://e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a" gracePeriod=60 Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.789810 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.797438 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744f4bf557-dr6ng" event={"ID":"9d9139f2-446c-49ea-9d61-f1d48df4998b","Type":"ContainerStarted","Data":"3b70b46eda19e88e4b64f1c9e73f3e8773ed4e8ef77a71248e6e755625735afb"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.797733 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.798412 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744f4bf557-dr6ng" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.804400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d8d88fb6d-d48d6" event={"ID":"8321aeb8-a56d-441f-8584-8392dfd855fa","Type":"ContainerStarted","Data":"6599f74aef2c58dab9010385d33847abda4db9182c14d012c77220a8286811ad"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.804537 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d8d88fb6d-d48d6" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.813847 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57966556c4-5mgs4" 
event={"ID":"65bc9e91-e18f-4da0-a068-1a2f5199068f","Type":"ContainerStarted","Data":"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.813973 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-57966556c4-5mgs4" podUID="65bc9e91-e18f-4da0-a068-1a2f5199068f" containerName="heat-cfnapi" containerID="cri-o://576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811" gracePeriod=60 Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.814177 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.815659 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-567897784f-4dflj" podStartSLOduration=11.815632129 podStartE2EDuration="11.815632129s" podCreationTimestamp="2026-01-26 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:57.811866513 +0000 UTC m=+1292.973138782" watchObservedRunningTime="2026-01-26 16:26:57.815632129 +0000 UTC m=+1292.976904398" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.820538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"87934669-654f-41be-ae1f-4805fe0619f1","Type":"ContainerStarted","Data":"e61e90f9a5188326b949413a2d3b921a13409810e3bfc883dfdd346d33a84dce"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.825994 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerStarted","Data":"a17c8c2dff898d07336e39e904c3abc863a311879c7ebda745e0c8e57904769a"} Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.849348 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d8d88fb6d-d48d6" podStartSLOduration=11.849323359 podStartE2EDuration="11.849323359s" podCreationTimestamp="2026-01-26 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:57.848456655 +0000 UTC m=+1293.009728924" watchObservedRunningTime="2026-01-26 16:26:57.849323359 +0000 UTC m=+1293.010595628" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.891565 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892445 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-scripts\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892551 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: 
I0126 16:26:57.892600 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qth\" (UniqueName: \"kubernetes.io/projected/ae2d7241-4b0c-45d9-be7a-0ecccf218107-kube-api-access-78qth\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892661 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2d7241-4b0c-45d9-be7a-0ecccf218107-logs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892744 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.892851 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2d7241-4b0c-45d9-be7a-0ecccf218107-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.896114 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.897997 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2d7241-4b0c-45d9-be7a-0ecccf218107-logs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.898718 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2d7241-4b0c-45d9-be7a-0ecccf218107-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.899459 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-scripts\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.903733 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.926705 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.929441 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-744f4bf557-dr6ng" podStartSLOduration=16.929422178 podStartE2EDuration="16.929422178s" podCreationTimestamp="2026-01-26 16:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:57.884985535 +0000 UTC m=+1293.046257804" watchObservedRunningTime="2026-01-26 16:26:57.929422178 +0000 UTC m=+1293.090694447" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.929822 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-config-data\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.931606 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d7241-4b0c-45d9-be7a-0ecccf218107-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.935416 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qth\" (UniqueName: \"kubernetes.io/projected/ae2d7241-4b0c-45d9-be7a-0ecccf218107-kube-api-access-78qth\") pod \"cinder-api-0\" (UID: \"ae2d7241-4b0c-45d9-be7a-0ecccf218107\") " pod="openstack/cinder-api-0" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.944960 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-85875d88f7-m4tq6" podStartSLOduration=5.78287842 podStartE2EDuration="19.944886734s" podCreationTimestamp="2026-01-26 16:26:38 +0000 UTC" firstStartedPulling="2026-01-26 16:26:40.559875021 +0000 UTC m=+1275.721147290" lastFinishedPulling="2026-01-26 16:26:54.721883335 +0000 UTC m=+1289.883155604" observedRunningTime="2026-01-26 16:26:57.916652518 +0000 UTC m=+1293.077924787" watchObservedRunningTime="2026-01-26 16:26:57.944886734 +0000 UTC m=+1293.106159003" Jan 26 16:26:57 crc kubenswrapper[4680]: I0126 16:26:57.953170 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.26415342 podStartE2EDuration="23.953153527s" podCreationTimestamp="2026-01-26 16:26:34 +0000 UTC" firstStartedPulling="2026-01-26 16:26:35.586635314 +0000 UTC m=+1270.747907583" lastFinishedPulling="2026-01-26 16:26:55.275635421 +0000 UTC m=+1290.436907690" observedRunningTime="2026-01-26 16:26:57.95039427 +0000 UTC m=+1293.111666539" watchObservedRunningTime="2026-01-26 16:26:57.953153527 +0000 UTC m=+1293.114425796" Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.004884 4680 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57966556c4-5mgs4" podStartSLOduration=5.617817926 podStartE2EDuration="20.004864556s" podCreationTimestamp="2026-01-26 16:26:38 +0000 UTC" firstStartedPulling="2026-01-26 16:26:40.354975703 +0000 UTC m=+1275.516247972" lastFinishedPulling="2026-01-26 16:26:54.742022343 +0000 UTC m=+1289.903294602" observedRunningTime="2026-01-26 16:26:57.984510132 +0000 UTC m=+1293.145782401" watchObservedRunningTime="2026-01-26 16:26:58.004864556 +0000 UTC m=+1293.166136825"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.068548 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.635598 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.846609 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerStarted","Data":"fc586f0f70a31f215b8cf1ae433186876cc3684a810fbdb57a4904438a0415bc"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.850343 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae2d7241-4b0c-45d9-be7a-0ecccf218107","Type":"ContainerStarted","Data":"21c5db424ac18a9909be4c174259041b05092f1fa97e05d7f95ec3ebc4760c35"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.851486 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerStarted","Data":"6261e6b1ad66baf8a58887061ba691f3b5896e9347804cfb8adfb6cf0bb4670e"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.852551 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6975d77d6d-c2r4q"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.864781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c564bb545-jh4l6" event={"ID":"cc13c199-5abb-4c89-a1b7-20b39fe83610","Type":"ContainerStarted","Data":"95237403ffe35bd96f1c966a2bd89afb599757227ddad1821777e7f274fced00"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.864847 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c564bb545-jh4l6"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.869256 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" event={"ID":"1095c056-3633-4d3f-a671-3eba54e8bcc9","Type":"ContainerStarted","Data":"5bddecaff0e27d7874cb0a3e46d42b50efdea5c096e2fc0e85c3f14fb1101add"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.869947 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.878251 4680 generic.go:334] "Generic (PLEG): container finished" podID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerID="68c9521200f5f646f1eb2f0d1243136760e2019080b4b816c5ae7e2e20943210" exitCode=1
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.878515 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerDied","Data":"68c9521200f5f646f1eb2f0d1243136760e2019080b4b816c5ae7e2e20943210"}
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.879267 4680 scope.go:117] "RemoveContainer" containerID="68c9521200f5f646f1eb2f0d1243136760e2019080b4b816c5ae7e2e20943210"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.883603 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6975d77d6d-c2r4q" podStartSLOduration=12.883586946 podStartE2EDuration="12.883586946s" podCreationTimestamp="2026-01-26 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:58.873511792 +0000 UTC m=+1294.034784061" watchObservedRunningTime="2026-01-26 16:26:58.883586946 +0000 UTC m=+1294.044859215"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.931983 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg" podStartSLOduration=9.93196433 podStartE2EDuration="9.93196433s" podCreationTimestamp="2026-01-26 16:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:58.897464028 +0000 UTC m=+1294.058736297" watchObservedRunningTime="2026-01-26 16:26:58.93196433 +0000 UTC m=+1294.093236599"
Jan 26 16:26:58 crc kubenswrapper[4680]: I0126 16:26:58.963676 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5c564bb545-jh4l6" podStartSLOduration=9.963650864 podStartE2EDuration="9.963650864s" podCreationTimestamp="2026-01-26 16:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:26:58.927218127 +0000 UTC m=+1294.088490396" watchObservedRunningTime="2026-01-26 16:26:58.963650864 +0000 UTC m=+1294.124923143"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.221375 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39034751-1073-4ef0-b70e-7553c2d9224c" path="/var/lib/kubelet/pods/39034751-1073-4ef0-b70e-7553c2d9224c/volumes"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.222127 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec969a4-689e-4e82-aedc-9bed8ebe99b2" path="/var/lib/kubelet/pods/7ec969a4-689e-4e82-aedc-9bed8ebe99b2/volumes"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.222897 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccd86b4-9a56-4f71-bec5-e4f2ea026725" path="/var/lib/kubelet/pods/bccd86b4-9a56-4f71-bec5-e4f2ea026725/volumes"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.565561 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5478f87d6c-x9mh9"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.576139 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.888955 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerStarted","Data":"460c82b90bd60de32fd8775e79fc33e830df4b4ecf21a672c427f76734a4802d"}
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.890543 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerStarted","Data":"34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92"}
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.891631 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-567897784f-4dflj"
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.897595 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae2d7241-4b0c-45d9-be7a-0ecccf218107","Type":"ContainerStarted","Data":"0f2595192fae9ed14a6e7be66a84798900056804563726976dd0743f2965a101"}
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.899688 4680 generic.go:334] "Generic (PLEG): container finished" podID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerID="6261e6b1ad66baf8a58887061ba691f3b5896e9347804cfb8adfb6cf0bb4670e" exitCode=1
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.899721 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerDied","Data":"6261e6b1ad66baf8a58887061ba691f3b5896e9347804cfb8adfb6cf0bb4670e"}
Jan 26 16:26:59 crc kubenswrapper[4680]: I0126 16:26:59.900316 4680 scope.go:117] "RemoveContainer" containerID="6261e6b1ad66baf8a58887061ba691f3b5896e9347804cfb8adfb6cf0bb4670e"
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.928197 4680 generic.go:334] "Generic (PLEG): container finished" podID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerID="34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92" exitCode=1
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.929423 4680 scope.go:117] "RemoveContainer" containerID="34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92"
Jan 26 16:27:00 crc kubenswrapper[4680]: E0126 16:27:00.929717 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-567897784f-4dflj_openstack(37c3f24a-cb51-4031-adb6-72eaa0605e60)\"" pod="openstack/heat-cfnapi-567897784f-4dflj" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60"
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.929984 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerDied","Data":"34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92"}
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.930024 4680 scope.go:117] "RemoveContainer" containerID="68c9521200f5f646f1eb2f0d1243136760e2019080b4b816c5ae7e2e20943210"
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.934876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerStarted","Data":"a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2"}
Jan 26 16:27:00 crc kubenswrapper[4680]: I0126 16:27:00.935890 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6975d77d6d-c2r4q"
Jan 26 16:27:01 crc kubenswrapper[4680]: I0126 16:27:01.012588 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerStarted","Data":"c37644a8d6ac0f1189331fb7df2c2809b07bb6216b0a22d272fa191a11bc4885"}
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.034461 4680 generic.go:334] "Generic (PLEG): container finished" podID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerID="a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2" exitCode=1
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.034833 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerDied","Data":"a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2"}
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.034879 4680 scope.go:117] "RemoveContainer" containerID="6261e6b1ad66baf8a58887061ba691f3b5896e9347804cfb8adfb6cf0bb4670e"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.035551 4680 scope.go:117] "RemoveContainer" containerID="a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.046111 4680 scope.go:117] "RemoveContainer" containerID="34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.057608 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae2d7241-4b0c-45d9-be7a-0ecccf218107","Type":"ContainerStarted","Data":"04380c7ce460ee32ebe036cd973c024e33334c696da3f015726d00b7961066c7"}
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.059117 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.102782 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-567897784f-4dflj"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.116582 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.116564217 podStartE2EDuration="5.116564217s" podCreationTimestamp="2026-01-26 16:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:02.109849607 +0000 UTC m=+1297.271121876" watchObservedRunningTime="2026-01-26 16:27:02.116564217 +0000 UTC m=+1297.277836476"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.144831 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6975d77d6d-c2r4q"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.340839 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744f4bf557-dr6ng"
Jan 26 16:27:02 crc kubenswrapper[4680]: E0126 16:27:02.351283 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6975d77d6d-c2r4q_openstack(fe3c5a21-465e-45f7-abfd-de0d9343ef40)\"" pod="openstack/heat-api-6975d77d6d-c2r4q" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40"
Jan 26 16:27:02 crc kubenswrapper[4680]: E0126 16:27:02.351628 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-567897784f-4dflj_openstack(37c3f24a-cb51-4031-adb6-72eaa0605e60)\"" pod="openstack/heat-cfnapi-567897784f-4dflj" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60"
Jan 26 16:27:02 crc kubenswrapper[4680]: I0126 16:27:02.500416 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744f4bf557-dr6ng"
Jan 26 16:27:03 crc kubenswrapper[4680]: I0126 16:27:03.074332 4680 scope.go:117] "RemoveContainer" containerID="a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2"
Jan 26 16:27:03 crc kubenswrapper[4680]: E0126 16:27:03.074845 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6975d77d6d-c2r4q_openstack(fe3c5a21-465e-45f7-abfd-de0d9343ef40)\"" pod="openstack/heat-api-6975d77d6d-c2r4q" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40"
Jan 26 16:27:03 crc kubenswrapper[4680]: I0126 16:27:03.075504 4680 scope.go:117] "RemoveContainer" containerID="34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92"
Jan 26 16:27:03 crc kubenswrapper[4680]: E0126 16:27:03.075667 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-567897784f-4dflj_openstack(37c3f24a-cb51-4031-adb6-72eaa0605e60)\"" pod="openstack/heat-cfnapi-567897784f-4dflj" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.113688 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c44b75754-m2rxl"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.114311 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c44b75754-m2rxl"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.115083 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerStarted","Data":"793bd47224c27badd31a5f37ace5c81d943439d33b05b3f2d9eb63b8fb8defa8"}
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.115301 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-central-agent" containerID="cri-o://fc586f0f70a31f215b8cf1ae433186876cc3684a810fbdb57a4904438a0415bc" gracePeriod=30
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.115706 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.116030 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="sg-core" containerID="cri-o://c37644a8d6ac0f1189331fb7df2c2809b07bb6216b0a22d272fa191a11bc4885" gracePeriod=30
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.116158 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="proxy-httpd" containerID="cri-o://793bd47224c27badd31a5f37ace5c81d943439d33b05b3f2d9eb63b8fb8defa8" gracePeriod=30
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.116243 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-notification-agent" containerID="cri-o://460c82b90bd60de32fd8775e79fc33e830df4b4ecf21a672c427f76734a4802d" gracePeriod=30
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.116518 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.162311 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=11.574608962 podStartE2EDuration="19.162294577s" podCreationTimestamp="2026-01-26 16:26:46 +0000 UTC" firstStartedPulling="2026-01-26 16:26:56.194218944 +0000 UTC m=+1291.355491213" lastFinishedPulling="2026-01-26 16:27:03.781904559 +0000 UTC m=+1298.943176828" observedRunningTime="2026-01-26 16:27:05.161254658 +0000 UTC m=+1300.322526917" watchObservedRunningTime="2026-01-26 16:27:05.162294577 +0000 UTC m=+1300.323566846"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.343648 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8657f7848d-ls2sv"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.343931 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8657f7848d-ls2sv"
Jan 26 16:27:05 crc kubenswrapper[4680]: I0126 16:27:05.345439 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 26 16:27:06 crc kubenswrapper[4680]: I0126 16:27:06.127400 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b754018-4981-4396-bfec-85590035d589" containerID="793bd47224c27badd31a5f37ace5c81d943439d33b05b3f2d9eb63b8fb8defa8" exitCode=0
Jan 26 16:27:06 crc kubenswrapper[4680]: I0126 16:27:06.127440 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b754018-4981-4396-bfec-85590035d589" containerID="c37644a8d6ac0f1189331fb7df2c2809b07bb6216b0a22d272fa191a11bc4885" exitCode=2
Jan 26 16:27:06 crc kubenswrapper[4680]: I0126 16:27:06.127621 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerDied","Data":"793bd47224c27badd31a5f37ace5c81d943439d33b05b3f2d9eb63b8fb8defa8"}
Jan 26 16:27:06 crc kubenswrapper[4680]: I0126 16:27:06.127694 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerDied","Data":"c37644a8d6ac0f1189331fb7df2c2809b07bb6216b0a22d272fa191a11bc4885"}
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.122280 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d8d88fb6d-d48d6"
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.153108 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b754018-4981-4396-bfec-85590035d589" containerID="460c82b90bd60de32fd8775e79fc33e830df4b4ecf21a672c427f76734a4802d" exitCode=0
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.153147 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerDied","Data":"460c82b90bd60de32fd8775e79fc33e830df4b4ecf21a672c427f76734a4802d"}
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.268891 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"]
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.276230 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5478f87d6c-x9mh9" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerName="heat-engine" containerID="cri-o://5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d" gracePeriod=60
Jan 26 16:27:07 crc kubenswrapper[4680]: I0126 16:27:07.566174 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-85875d88f7-m4tq6"
Jan 26 16:27:08 crc kubenswrapper[4680]: I0126 16:27:08.216742 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-57966556c4-5mgs4"
Jan 26 16:27:08 crc kubenswrapper[4680]: I0126 16:27:08.608602 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-64ddd8bb8d-7kzsg"
Jan 26 16:27:08 crc kubenswrapper[4680]: I0126 16:27:08.680409 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"]
Jan 26 16:27:09 crc kubenswrapper[4680]: E0126 16:27:09.095010 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:27:09 crc kubenswrapper[4680]: E0126 16:27:09.099185 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:27:09 crc kubenswrapper[4680]: E0126 16:27:09.106197 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:27:09 crc kubenswrapper[4680]: E0126 16:27:09.106267 4680 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5478f87d6c-x9mh9" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerName="heat-engine"
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.221218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567897784f-4dflj" event={"ID":"37c3f24a-cb51-4031-adb6-72eaa0605e60","Type":"ContainerDied","Data":"b275fff03875358e1c7c80212dc18ee78085837bea342e7edd3282919beb21cd"}
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.221822 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b275fff03875358e1c7c80212dc18ee78085837bea342e7edd3282919beb21cd"
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.257265 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567897784f-4dflj"
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.388297 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data\") pod \"37c3f24a-cb51-4031-adb6-72eaa0605e60\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") "
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.388388 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjxd\" (UniqueName: \"kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd\") pod \"37c3f24a-cb51-4031-adb6-72eaa0605e60\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") "
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.388450 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle\") pod \"37c3f24a-cb51-4031-adb6-72eaa0605e60\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") "
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.388592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom\") pod \"37c3f24a-cb51-4031-adb6-72eaa0605e60\" (UID: \"37c3f24a-cb51-4031-adb6-72eaa0605e60\") "
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.418264 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37c3f24a-cb51-4031-adb6-72eaa0605e60" (UID: "37c3f24a-cb51-4031-adb6-72eaa0605e60"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.418511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd" (OuterVolumeSpecName: "kube-api-access-lzjxd") pod "37c3f24a-cb51-4031-adb6-72eaa0605e60" (UID: "37c3f24a-cb51-4031-adb6-72eaa0605e60"). InnerVolumeSpecName "kube-api-access-lzjxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.462209 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37c3f24a-cb51-4031-adb6-72eaa0605e60" (UID: "37c3f24a-cb51-4031-adb6-72eaa0605e60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.469404 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data" (OuterVolumeSpecName: "config-data") pod "37c3f24a-cb51-4031-adb6-72eaa0605e60" (UID: "37c3f24a-cb51-4031-adb6-72eaa0605e60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.491604 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.491662 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.491677 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjxd\" (UniqueName: \"kubernetes.io/projected/37c3f24a-cb51-4031-adb6-72eaa0605e60-kube-api-access-lzjxd\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.491689 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c3f24a-cb51-4031-adb6-72eaa0605e60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.743464 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5c564bb545-jh4l6"
Jan 26 16:27:09 crc kubenswrapper[4680]: I0126 16:27:09.871984 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"]
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.225382 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567897784f-4dflj"
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.395145 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"]
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.423679 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-567897784f-4dflj"]
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.583525 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6975d77d6d-c2r4q"
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.737836 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle\") pod \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") "
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.738317 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jjm\" (UniqueName: \"kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm\") pod \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") "
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.738421 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom\") pod \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") "
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.738475 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data\") pod \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\" (UID: \"fe3c5a21-465e-45f7-abfd-de0d9343ef40\") "
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.749305 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm" (OuterVolumeSpecName: "kube-api-access-p4jjm") pod "fe3c5a21-465e-45f7-abfd-de0d9343ef40" (UID: "fe3c5a21-465e-45f7-abfd-de0d9343ef40"). InnerVolumeSpecName "kube-api-access-p4jjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.791213 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe3c5a21-465e-45f7-abfd-de0d9343ef40" (UID: "fe3c5a21-465e-45f7-abfd-de0d9343ef40"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.807322 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe3c5a21-465e-45f7-abfd-de0d9343ef40" (UID: "fe3c5a21-465e-45f7-abfd-de0d9343ef40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.842832 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4jjm\" (UniqueName: \"kubernetes.io/projected/fe3c5a21-465e-45f7-abfd-de0d9343ef40-kube-api-access-p4jjm\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.842871 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.842883 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.860202 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data" (OuterVolumeSpecName: "config-data") pod "fe3c5a21-465e-45f7-abfd-de0d9343ef40" (UID: "fe3c5a21-465e-45f7-abfd-de0d9343ef40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:10 crc kubenswrapper[4680]: I0126 16:27:10.944260 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3c5a21-465e-45f7-abfd-de0d9343ef40-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.179632 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" path="/var/lib/kubelet/pods/37c3f24a-cb51-4031-adb6-72eaa0605e60/volumes"
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.236952 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6975d77d6d-c2r4q" event={"ID":"fe3c5a21-465e-45f7-abfd-de0d9343ef40","Type":"ContainerDied","Data":"a17c8c2dff898d07336e39e904c3abc863a311879c7ebda745e0c8e57904769a"}
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.237016 4680 scope.go:117] "RemoveContainer" containerID="a15c53777db68050bfa478280277fa0b4039052117ba978116360b83eccf78b2"
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.237081 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6975d77d6d-c2r4q"
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.278012 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"]
Jan 26 16:27:11 crc kubenswrapper[4680]: I0126 16:27:11.289923 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6975d77d6d-c2r4q"]
Jan 26 16:27:12 crc kubenswrapper[4680]: I0126 16:27:12.081301 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="ae2d7241-4b0c-45d9-be7a-0ecccf218107" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.181:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:27:13 crc kubenswrapper[4680]: I0126 16:27:13.073295 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ae2d7241-4b0c-45d9-be7a-0ecccf218107" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.181:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:27:13 crc kubenswrapper[4680]: I0126 16:27:13.184679 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" path="/var/lib/kubelet/pods/fe3c5a21-465e-45f7-abfd-de0d9343ef40/volumes"
Jan 26 16:27:15 crc kubenswrapper[4680]: I0126 16:27:15.113914 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 26 16:27:15 crc kubenswrapper[4680]: I0126 16:27:15.343525 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 26 16:27:16 crc kubenswrapper[4680]: I0126 16:27:16.277816 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b754018-4981-4396-bfec-85590035d589" containerID="fc586f0f70a31f215b8cf1ae433186876cc3684a810fbdb57a4904438a0415bc" exitCode=0
Jan 26 16:27:16 crc kubenswrapper[4680]: I0126 16:27:16.278039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerDied","Data":"fc586f0f70a31f215b8cf1ae433186876cc3684a810fbdb57a4904438a0415bc"}
Jan 26 16:27:16 crc kubenswrapper[4680]: I0126 16:27:16.280426 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerID="5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d" exitCode=0
Jan 26 16:27:16 crc kubenswrapper[4680]: I0126 16:27:16.280458 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5478f87d6c-x9mh9" event={"ID":"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79","Type":"ContainerDied","Data":"5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d"}
Jan 26 16:27:16 crc kubenswrapper[4680]: I0126 16:27:16.693863 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.175:3000/\": dial tcp 10.217.0.175:3000: connect: connection refused"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.038555 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5478f87d6c-x9mh9"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.051683 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.096250 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="ae2d7241-4b0c-45d9-be7a-0ecccf218107" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.181:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.134847 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165223 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6whrt\" (UniqueName: \"kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt\") pod \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165258 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165286 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle\") pod \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165307 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom\") pod \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165334 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbwh4\" (UniqueName: \"kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165360 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165402 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data\") pod \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\" (UID: \"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165518 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165545 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165586 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.165623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data\") pod \"5b754018-4981-4396-bfec-85590035d589\" (UID: \"5b754018-4981-4396-bfec-85590035d589\") "
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.169839 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.171238 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.194800 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" (UID: "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.198028 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts" (OuterVolumeSpecName: "scripts") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.198152 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4" (OuterVolumeSpecName: "kube-api-access-mbwh4") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "kube-api-access-mbwh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.245404 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt" (OuterVolumeSpecName: "kube-api-access-6whrt") pod "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" (UID: "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79"). InnerVolumeSpecName "kube-api-access-6whrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267479 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267519 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbwh4\" (UniqueName: \"kubernetes.io/projected/5b754018-4981-4396-bfec-85590035d589-kube-api-access-mbwh4\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267533 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267546 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267557 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b754018-4981-4396-bfec-85590035d589-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.267565 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6whrt\" (UniqueName: \"kubernetes.io/projected/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-kube-api-access-6whrt\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.314198 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" (UID: "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.331262 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.338625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b754018-4981-4396-bfec-85590035d589","Type":"ContainerDied","Data":"2e3d9a2c4fb777ce77e8ac71a8785cd5db8554e671ae5b738937d90cbd63f6b4"}
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.338692 4680 scope.go:117] "RemoveContainer" containerID="793bd47224c27badd31a5f37ace5c81d943439d33b05b3f2d9eb63b8fb8defa8"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.374129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5478f87d6c-x9mh9" event={"ID":"3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79","Type":"ContainerDied","Data":"840677c813be7e91d795b3579f4a112cceeecb58adc37af98822588bcf46d39f"}
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.374478 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5478f87d6c-x9mh9"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.386043 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.429342 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.432293 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data" (OuterVolumeSpecName: "config-data") pod "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" (UID: "3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.477356 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.497836 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.497884 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.497896 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.714775 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"]
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.722272 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5478f87d6c-x9mh9"]
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.740928 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data" (OuterVolumeSpecName: "config-data") pod "5b754018-4981-4396-bfec-85590035d589" (UID: "5b754018-4981-4396-bfec-85590035d589"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.758541 4680 scope.go:117] "RemoveContainer" containerID="c37644a8d6ac0f1189331fb7df2c2809b07bb6216b0a22d272fa191a11bc4885"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.809133 4680 scope.go:117] "RemoveContainer" containerID="460c82b90bd60de32fd8775e79fc33e830df4b4ecf21a672c427f76734a4802d"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.811215 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b754018-4981-4396-bfec-85590035d589-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:27:17 crc kubenswrapper[4680]: E0126 16:27:17.818509 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ebf7926_1f6f_4f6e_a57d_e4cb3a1b1a79.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.895442 4680 scope.go:117] "RemoveContainer" containerID="fc586f0f70a31f215b8cf1ae433186876cc3684a810fbdb57a4904438a0415bc"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.962358 4680 scope.go:117] "RemoveContainer" containerID="5574f8a449d0d089bd310bb9c3030adf7d6af4be65fcb4120d7d1af843c70f7d"
Jan 26 16:27:17 crc kubenswrapper[4680]: I0126 16:27:17.986921 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.024689 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052136 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052557 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052571 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052590 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="proxy-httpd"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052596 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="proxy-httpd"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052604 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052609 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052617 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-central-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052622 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-central-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052631 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="sg-core"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052638 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="sg-core"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052653 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-notification-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052659 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-notification-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052670 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerName="heat-engine"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052677 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerName="heat-engine"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052686 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052692 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: E0126 16:27:18.052702 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052707 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052867 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="proxy-httpd"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052882 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052889 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" containerName="heat-engine"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052899 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052908 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c3f24a-cb51-4031-adb6-72eaa0605e60" containerName="heat-cfnapi"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052920 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-notification-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052930 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="sg-core"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.052940 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b754018-4981-4396-bfec-85590035d589" containerName="ceilometer-central-agent"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.053256 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3c5a21-465e-45f7-abfd-de0d9343ef40" containerName="heat-api"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.054500 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.060428 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.060596 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.083825 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127250 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127275 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68c8t\" (UniqueName: \"kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127326 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127347 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127387 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.127468 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229375 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229419 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68c8t\" (UniqueName: \"kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229516 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229592 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.229623 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.230534 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.230615 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.234469 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.234622 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.251101 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.259707 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.263813 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68c8t\" (UniqueName: \"kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t\") pod \"ceilometer-0\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " pod="openstack/ceilometer-0"
Jan 26 16:27:18 crc kubenswrapper[4680]: I0126 16:27:18.372047 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:27:19 crc kubenswrapper[4680]: I0126 16:27:19.179547 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79" path="/var/lib/kubelet/pods/3ebf7926-1f6f-4f6e-a57d-e4cb3a1b1a79/volumes"
Jan 26 16:27:19 crc kubenswrapper[4680]: I0126 16:27:19.180329 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b754018-4981-4396-bfec-85590035d589" path="/var/lib/kubelet/pods/5b754018-4981-4396-bfec-85590035d589/volumes"
Jan 26 16:27:19 crc kubenswrapper[4680]: I0126 16:27:19.193040 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:19 crc kubenswrapper[4680]: I0126 16:27:19.394822 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerStarted","Data":"3d00b1673c5c5e381d06bfe649bcd45faa4acdcbe328dd17de6d9be746acd75e"}
Jan 26 16:27:21 crc kubenswrapper[4680]: I0126 16:27:21.067259 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:27:21 crc kubenswrapper[4680]: I0126 16:27:21.431221 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerStarted","Data":"2d59a5e4bbe56fe9763049fcbefe12a707dc9f87c7ebfdbbaba4ff75bf628156"}
Jan 26 16:27:22 crc kubenswrapper[4680]: I0126 16:27:22.441784 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerStarted","Data":"0287777a9d10f356c3462ccefb3da05382adf5dc1a30152bfa2461d416b9ecbd"}
Jan 26 16:27:23 crc kubenswrapper[4680]: I0126 16:27:23.003758 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:27:23 crc kubenswrapper[4680]: I0126 16:27:23.003992 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-log" containerID="cri-o://e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10" gracePeriod=30
Jan 26 16:27:23 crc kubenswrapper[4680]: I0126 16:27:23.012032 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-httpd" containerID="cri-o://2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e" gracePeriod=30
Jan 26 16:27:23 crc kubenswrapper[4680]: I0126 16:27:23.451309 4680 generic.go:334] "Generic (PLEG): container finished" podID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerID="e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10" exitCode=143
Jan 26 16:27:23 crc kubenswrapper[4680]: I0126 16:27:23.451354 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerDied","Data":"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10"}
Jan 26 16:27:24 crc kubenswrapper[4680]: I0126 16:27:24.460400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerStarted","Data":"1991794023b8e95fc597b15d7c66dfc0a2d45cfc0e488e096db7bf4c78e04fa0"}
Jan 26 16:27:25 crc kubenswrapper[4680]: I0126
16:27:25.113459 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.113846 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.114695 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd"} pod="openstack/horizon-c44b75754-m2rxl" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.114807 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" containerID="cri-o://d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd" gracePeriod=30 Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.343974 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.344083 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.344804 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c69624d2bcc285b657d662c97a87069e6ddd188655dd4cffa20769a64bbb9a15"} pod="openstack/horizon-8657f7848d-ls2sv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 26 16:27:25 crc kubenswrapper[4680]: I0126 16:27:25.344840 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" containerID="cri-o://c69624d2bcc285b657d662c97a87069e6ddd188655dd4cffa20769a64bbb9a15" gracePeriod=30 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.143017 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.312381 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.312747 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.312766 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.312797 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313042 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313110 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b2xp\" (UniqueName: \"kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313115 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313206 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data\") pod \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\" (UID: \"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e\") " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.313817 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.316805 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs" (OuterVolumeSpecName: "logs") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.326276 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp" (OuterVolumeSpecName: "kube-api-access-9b2xp") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "kube-api-access-9b2xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.331423 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts" (OuterVolumeSpecName: "scripts") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.355438 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.375241 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.398628 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415385 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415416 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b2xp\" (UniqueName: \"kubernetes.io/projected/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-kube-api-access-9b2xp\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415454 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415467 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415479 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.415492 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.449130 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data" (OuterVolumeSpecName: "config-data") pod "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" (UID: "b544d0c4-7eb7-4ccf-9a40-cc6d4192613e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.450854 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.488957 4680 generic.go:334] "Generic (PLEG): container finished" podID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerID="2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e" exitCode=0 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.489016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerDied","Data":"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e"} Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.489044 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b544d0c4-7eb7-4ccf-9a40-cc6d4192613e","Type":"ContainerDied","Data":"ec6181946ca041de069d11f8b19fafc14b9be37191cb040ec1caa89d8cabdcef"} Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.489062 4680 scope.go:117] "RemoveContainer" containerID="2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.489211 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492342 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerStarted","Data":"c0271566dff295f48ae35768f81fd04ee286264ae4200c88a7fcf87767b505cb"} Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492466 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-central-agent" containerID="cri-o://2d59a5e4bbe56fe9763049fcbefe12a707dc9f87c7ebfdbbaba4ff75bf628156" gracePeriod=30 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492679 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492723 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="proxy-httpd" containerID="cri-o://c0271566dff295f48ae35768f81fd04ee286264ae4200c88a7fcf87767b505cb" gracePeriod=30 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492762 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="sg-core" containerID="cri-o://1991794023b8e95fc597b15d7c66dfc0a2d45cfc0e488e096db7bf4c78e04fa0" gracePeriod=30 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.492793 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-notification-agent" containerID="cri-o://0287777a9d10f356c3462ccefb3da05382adf5dc1a30152bfa2461d416b9ecbd" gracePeriod=30 Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.530350 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.530387 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.575959 4680 scope.go:117] "RemoveContainer" containerID="e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.602500 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.566624305 podStartE2EDuration="10.602472568s" podCreationTimestamp="2026-01-26 16:27:17 +0000 UTC" firstStartedPulling="2026-01-26 16:27:19.199478641 +0000 UTC m=+1314.360750910" lastFinishedPulling="2026-01-26 16:27:26.235326884 +0000 UTC m=+1321.396599173" observedRunningTime="2026-01-26 16:27:27.573745888 +0000 UTC m=+1322.735018177" watchObservedRunningTime="2026-01-26 16:27:27.602472568 +0000 UTC m=+1322.763744837" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.609365 4680 scope.go:117] "RemoveContainer" containerID="2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e" Jan 26 16:27:27 crc kubenswrapper[4680]: E0126 16:27:27.612298 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e\": container with ID starting with 2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e not found: ID does not exist" containerID="2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.612334 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e"} err="failed to get container status \"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e\": rpc error: code = NotFound desc = could not find container \"2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e\": container with ID starting with 2d9274da789a588f5dab6534b35494e2889c4c3760d15c7956c2d20383ebb26e not found: ID does not exist" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.612360 4680 scope.go:117] "RemoveContainer" containerID="e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10" Jan 26 16:27:27 crc kubenswrapper[4680]: E0126 16:27:27.612706 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10\": container with ID starting with e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10 not found: ID does not exist" containerID="e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.612742 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10"} err="failed to get container status \"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10\": rpc error: code = NotFound desc = could not find container \"e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10\": container with ID starting with e3dfdfbb2859323651f776c2478de1b76ada8378f84b3fa0a53e8571ccfeac10 not found: ID does not exist" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.625238 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.654171 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.655824 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:27:27 crc kubenswrapper[4680]: E0126 16:27:27.656323 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-httpd" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.656338 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-httpd" Jan 26 16:27:27 crc kubenswrapper[4680]: E0126 16:27:27.656369 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-log" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.656377 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-log" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.656599 4680 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-httpd" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.656618 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" containerName="glance-log" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.657828 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.662530 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.681509 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.697818 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737326 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54qp\" (UniqueName: \"kubernetes.io/projected/b3e829e8-924e-452d-9f58-268813fa6d7e-kube-api-access-v54qp\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737436 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737506 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737537 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737596 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737620 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.737658 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840039 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840459 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840609 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840823 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54qp\" (UniqueName: \"kubernetes.io/projected/b3e829e8-924e-452d-9f58-268813fa6d7e-kube-api-access-v54qp\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.840944 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.841900 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.842309 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.843472 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e829e8-924e-452d-9f58-268813fa6d7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.847586 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.847960 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.849680 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.851719 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e829e8-924e-452d-9f58-268813fa6d7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.869806 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54qp\" (UniqueName: \"kubernetes.io/projected/b3e829e8-924e-452d-9f58-268813fa6d7e-kube-api-access-v54qp\") pod \"glance-default-internal-api-0\" (UID: \"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:27 crc kubenswrapper[4680]: I0126 16:27:27.878912 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"b3e829e8-924e-452d-9f58-268813fa6d7e\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.068561 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503196 4680 generic.go:334] "Generic (PLEG): container finished" podID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerID="c0271566dff295f48ae35768f81fd04ee286264ae4200c88a7fcf87767b505cb" exitCode=0 Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503482 4680 generic.go:334] "Generic (PLEG): container finished" podID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerID="1991794023b8e95fc597b15d7c66dfc0a2d45cfc0e488e096db7bf4c78e04fa0" exitCode=2 Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503491 4680 generic.go:334] "Generic (PLEG): container finished" podID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerID="0287777a9d10f356c3462ccefb3da05382adf5dc1a30152bfa2461d416b9ecbd" exitCode=0 Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerDied","Data":"c0271566dff295f48ae35768f81fd04ee286264ae4200c88a7fcf87767b505cb"} Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503522 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerDied","Data":"1991794023b8e95fc597b15d7c66dfc0a2d45cfc0e488e096db7bf4c78e04fa0"} Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.503536 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerDied","Data":"0287777a9d10f356c3462ccefb3da05382adf5dc1a30152bfa2461d416b9ecbd"} Jan 26 16:27:28 crc kubenswrapper[4680]: I0126 16:27:28.805555 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:27:28 crc kubenswrapper[4680]: W0126 16:27:28.824043 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e829e8_924e_452d_9f58_268813fa6d7e.slice/crio-08d74dfda719c25595cfecf3c00882509d71ba459f535a875b0492fc8a518e10 WatchSource:0}: Error finding container 08d74dfda719c25595cfecf3c00882509d71ba459f535a875b0492fc8a518e10: Status 404 returned error can't find the container with id 08d74dfda719c25595cfecf3c00882509d71ba459f535a875b0492fc8a518e10 Jan 26 16:27:29 crc kubenswrapper[4680]: I0126 16:27:29.215134 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b544d0c4-7eb7-4ccf-9a40-cc6d4192613e" path="/var/lib/kubelet/pods/b544d0c4-7eb7-4ccf-9a40-cc6d4192613e/volumes" Jan 26 16:27:29 crc kubenswrapper[4680]: I0126 16:27:29.512313 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3e829e8-924e-452d-9f58-268813fa6d7e","Type":"ContainerStarted","Data":"08d74dfda719c25595cfecf3c00882509d71ba459f535a875b0492fc8a518e10"} Jan 26 16:27:30 crc kubenswrapper[4680]: I0126 16:27:30.520886 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3e829e8-924e-452d-9f58-268813fa6d7e","Type":"ContainerStarted","Data":"0909aab72c6ec96e59a8ccdecbd3a6ad9d6f3273d7800ce5e8b9e9114f40c056"} Jan 26 16:27:31 crc 
kubenswrapper[4680]: I0126 16:27:31.530772 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3e829e8-924e-452d-9f58-268813fa6d7e","Type":"ContainerStarted","Data":"4f67fd563b1d94e99dbc5eeb190ac711212615295935e67afd87c82cc3f6aa20"} Jan 26 16:27:31 crc kubenswrapper[4680]: I0126 16:27:31.557733 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.557714017 podStartE2EDuration="4.557714017s" podCreationTimestamp="2026-01-26 16:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:31.553050535 +0000 UTC m=+1326.714322804" watchObservedRunningTime="2026-01-26 16:27:31.557714017 +0000 UTC m=+1326.718986286" Jan 26 16:27:32 crc kubenswrapper[4680]: I0126 16:27:32.229929 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:32 crc kubenswrapper[4680]: I0126 16:27:32.230180 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-log" containerID="cri-o://909532695ee2e23844ea8d0405787851f2c4418a489c5b9b6e5d76cb7c64c93b" gracePeriod=30 Jan 26 16:27:32 crc kubenswrapper[4680]: I0126 16:27:32.230272 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-httpd" containerID="cri-o://4c5df3f4d582331e88910e42cd22b166ac2e56fad3c6fb3540e82aa5d641ed98" gracePeriod=30 Jan 26 16:27:32 crc kubenswrapper[4680]: I0126 16:27:32.539561 4680 generic.go:334] "Generic (PLEG): container finished" podID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerID="909532695ee2e23844ea8d0405787851f2c4418a489c5b9b6e5d76cb7c64c93b" exitCode=143 Jan 26 16:27:32 crc kubenswrapper[4680]: I0126 16:27:32.539649 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerDied","Data":"909532695ee2e23844ea8d0405787851f2c4418a489c5b9b6e5d76cb7c64c93b"} Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.558946 4680 generic.go:334] "Generic (PLEG): container finished" podID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerID="2d59a5e4bbe56fe9763049fcbefe12a707dc9f87c7ebfdbbaba4ff75bf628156" exitCode=0 Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.559032 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerDied","Data":"2d59a5e4bbe56fe9763049fcbefe12a707dc9f87c7ebfdbbaba4ff75bf628156"} Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.559598 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446997f9-01cd-42d6-abc3-d6413ca79ba5","Type":"ContainerDied","Data":"3d00b1673c5c5e381d06bfe649bcd45faa4acdcbe328dd17de6d9be746acd75e"} Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.559614 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d00b1673c5c5e381d06bfe649bcd45faa4acdcbe328dd17de6d9be746acd75e" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.624420 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798144 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798221 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68c8t\" (UniqueName: \"kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798257 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798311 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798344 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798416 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.798529 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle\") pod \"446997f9-01cd-42d6-abc3-d6413ca79ba5\" (UID: \"446997f9-01cd-42d6-abc3-d6413ca79ba5\") " Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.799276 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.799305 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.841575 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.847411 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t" (OuterVolumeSpecName: "kube-api-access-68c8t") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "kube-api-access-68c8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.856192 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts" (OuterVolumeSpecName: "scripts") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.900938 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.900967 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68c8t\" (UniqueName: \"kubernetes.io/projected/446997f9-01cd-42d6-abc3-d6413ca79ba5-kube-api-access-68c8t\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.900978 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.900986 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.900995 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446997f9-01cd-42d6-abc3-d6413ca79ba5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:34 crc kubenswrapper[4680]: I0126 16:27:34.949594 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.002223 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.036215 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data" (OuterVolumeSpecName: "config-data") pod "446997f9-01cd-42d6-abc3-d6413ca79ba5" (UID: "446997f9-01cd-42d6-abc3-d6413ca79ba5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.103430 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446997f9-01cd-42d6-abc3-d6413ca79ba5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.569227 4680 generic.go:334] "Generic (PLEG): container finished" podID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerID="4c5df3f4d582331e88910e42cd22b166ac2e56fad3c6fb3540e82aa5d641ed98" exitCode=0 Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.569310 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerDied","Data":"4c5df3f4d582331e88910e42cd22b166ac2e56fad3c6fb3540e82aa5d641ed98"} Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.569631 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.591216 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.598594 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.615482 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:35 crc kubenswrapper[4680]: E0126 16:27:35.615976 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-central-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.615999 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-central-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: E0126 16:27:35.616019 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="proxy-httpd" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616028 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="proxy-httpd" Jan 26 16:27:35 crc kubenswrapper[4680]: E0126 16:27:35.616058 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="sg-core" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616082 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="sg-core" Jan 26 16:27:35 crc kubenswrapper[4680]: E0126 16:27:35.616098 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" 
containerName="ceilometer-notification-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616106 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-notification-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616325 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="proxy-httpd" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616346 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-notification-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616365 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="ceilometer-central-agent" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.616379 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" containerName="sg-core" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.618361 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.620365 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.623122 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.629047 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.714888 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.714944 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.715009 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.715138 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.715161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " 
pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.715177 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.715203 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqk8\" (UniqueName: \"kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816579 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqk8\" (UniqueName: \"kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816666 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816693 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816750 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816822 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.816844 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.817181 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.817264 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.822759 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.822929 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.823439 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.825016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.840137 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqk8\" (UniqueName: \"kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8\") pod \"ceilometer-0\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " pod="openstack/ceilometer-0" Jan 26 16:27:35 crc kubenswrapper[4680]: I0126 16:27:35.937689 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.598224 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.709172 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.852870 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853151 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxb8\" (UniqueName: \"kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853180 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853208 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853233 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853260 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853317 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.853390 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data\") pod \"59c7199c-14bd-4851-9059-e677cad6f9c2\" (UID: \"59c7199c-14bd-4851-9059-e677cad6f9c2\") " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.854218 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.854248 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs" (OuterVolumeSpecName: "logs") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.861549 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.874642 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts" (OuterVolumeSpecName: "scripts") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.881153 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8" (OuterVolumeSpecName: "kube-api-access-rfxb8") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "kube-api-access-rfxb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.910412 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data" (OuterVolumeSpecName: "config-data") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.922660 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.946938 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "59c7199c-14bd-4851-9059-e677cad6f9c2" (UID: "59c7199c-14bd-4851-9059-e677cad6f9c2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956374 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956491 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxb8\" (UniqueName: \"kubernetes.io/projected/59c7199c-14bd-4851-9059-e677cad6f9c2-kube-api-access-rfxb8\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956585 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956651 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956719 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956788 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956856 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59c7199c-14bd-4851-9059-e677cad6f9c2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:36 crc kubenswrapper[4680]: I0126 16:27:36.956909 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c7199c-14bd-4851-9059-e677cad6f9c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.004809 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.059193 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.180747 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446997f9-01cd-42d6-abc3-d6413ca79ba5" path="/var/lib/kubelet/pods/446997f9-01cd-42d6-abc3-d6413ca79ba5/volumes" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.590115 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerStarted","Data":"f98400de191fcbf28265ab9daf2da6c1f032c1dde13f183171077d3218ddc52d"} Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.592840 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59c7199c-14bd-4851-9059-e677cad6f9c2","Type":"ContainerDied","Data":"297d5a8e98c80f29bc6f1fc528f93918b5c98fdf20aa43c87e9f60343d65af22"} Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.592901 4680 
scope.go:117] "RemoveContainer" containerID="4c5df3f4d582331e88910e42cd22b166ac2e56fad3c6fb3540e82aa5d641ed98" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.593140 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.615889 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.625514 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.628855 4680 scope.go:117] "RemoveContainer" containerID="909532695ee2e23844ea8d0405787851f2c4418a489c5b9b6e5d76cb7c64c93b" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.653428 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:37 crc kubenswrapper[4680]: E0126 16:27:37.653862 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-httpd" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.653874 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-httpd" Jan 26 16:27:37 crc kubenswrapper[4680]: E0126 16:27:37.653883 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-log" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.653892 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-log" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.654088 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-httpd" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.654103 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" containerName="glance-log" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.655043 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.658145 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.665490 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.666692 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.771508 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-logs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.771759 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8ffb\" (UniqueName: \"kubernetes.io/projected/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-kube-api-access-t8ffb\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.771863 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.771946 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.772045 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.772152 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.772231 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.772345 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.873867 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.873923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.873944 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874010 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874079 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-logs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874107 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8ffb\" (UniqueName: \"kubernetes.io/projected/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-kube-api-access-t8ffb\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874125 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874150 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874510 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.874741 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-logs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.875231 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.881287 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.882732 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.893975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.894724 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.897595 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8ffb\" (UniqueName: \"kubernetes.io/projected/0a50cc0b-1f21-4d47-bf4b-bad9016579bb-kube-api-access-t8ffb\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:37 crc kubenswrapper[4680]: I0126 16:27:37.912897 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"0a50cc0b-1f21-4d47-bf4b-bad9016579bb\") " pod="openstack/glance-default-external-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.033439 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.069118 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.069177 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.111368 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.130341 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.602140 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerStarted","Data":"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb"} Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.602694 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.602709 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:38 crc kubenswrapper[4680]: I0126 16:27:38.616402 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:27:38 crc kubenswrapper[4680]: W0126 16:27:38.622625 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a50cc0b_1f21_4d47_bf4b_bad9016579bb.slice/crio-41c6f424329d8eb98a63170f78e4a93d2083193f0c72220feabd9210f24f46a3 WatchSource:0}: Error finding container 41c6f424329d8eb98a63170f78e4a93d2083193f0c72220feabd9210f24f46a3: Status 404 returned error can't find the container with id 41c6f424329d8eb98a63170f78e4a93d2083193f0c72220feabd9210f24f46a3 Jan 26 16:27:39 crc kubenswrapper[4680]: I0126 16:27:39.181974 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c7199c-14bd-4851-9059-e677cad6f9c2" path="/var/lib/kubelet/pods/59c7199c-14bd-4851-9059-e677cad6f9c2/volumes" Jan 26 16:27:39 crc kubenswrapper[4680]: I0126 16:27:39.618300 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a50cc0b-1f21-4d47-bf4b-bad9016579bb","Type":"ContainerStarted","Data":"a919eccaeab02e1cb18c5a7c37835737bc0817362904f6e4ccf492a5860c3d94"} Jan 26 16:27:39 crc kubenswrapper[4680]: I0126 16:27:39.618432 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a50cc0b-1f21-4d47-bf4b-bad9016579bb","Type":"ContainerStarted","Data":"41c6f424329d8eb98a63170f78e4a93d2083193f0c72220feabd9210f24f46a3"} Jan 26 16:27:41 crc kubenswrapper[4680]: I0126 16:27:41.637265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerStarted","Data":"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34"} Jan 26 16:27:41 crc kubenswrapper[4680]: I0126 16:27:41.639560 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0a50cc0b-1f21-4d47-bf4b-bad9016579bb","Type":"ContainerStarted","Data":"f59884a83948dd09c3dd06ad6a121623e8331980dfe8015adcf941400f976e5c"} Jan 26 16:27:41 crc kubenswrapper[4680]: I0126 16:27:41.679173 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.679150785 podStartE2EDuration="4.679150785s" podCreationTimestamp="2026-01-26 16:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:41.677044785 +0000 UTC m=+1336.838317074" watchObservedRunningTime="2026-01-26 16:27:41.679150785 +0000 UTC m=+1336.840423054" Jan 26 16:27:41 crc kubenswrapper[4680]: I0126 16:27:41.934767 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:42 crc kubenswrapper[4680]: I0126 16:27:42.290221 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:42 crc kubenswrapper[4680]: I0126 16:27:42.290324 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:27:42 crc kubenswrapper[4680]: I0126 16:27:42.292533 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:27:43 crc kubenswrapper[4680]: I0126 16:27:43.659148 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerStarted","Data":"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1"} Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.033687 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.034300 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.085892 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.103093 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.718704 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerStarted","Data":"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef"} Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.719320 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.719340 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.719352 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.718965 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="sg-core" containerID="cri-o://7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1" 
gracePeriod=30 Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.718920 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="proxy-httpd" containerID="cri-o://eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef" gracePeriod=30 Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.718902 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-central-agent" containerID="cri-o://c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb" gracePeriod=30 Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.718953 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-notification-agent" containerID="cri-o://02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34" gracePeriod=30 Jan 26 16:27:48 crc kubenswrapper[4680]: I0126 16:27:48.746536 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.158279755 podStartE2EDuration="13.746520227s" podCreationTimestamp="2026-01-26 16:27:35 +0000 UTC" firstStartedPulling="2026-01-26 16:27:36.642950992 +0000 UTC m=+1331.804223261" lastFinishedPulling="2026-01-26 16:27:48.231191464 +0000 UTC m=+1343.392463733" observedRunningTime="2026-01-26 16:27:48.737938115 +0000 UTC m=+1343.899210384" watchObservedRunningTime="2026-01-26 16:27:48.746520227 +0000 UTC m=+1343.907792496" Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.735231 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef76123-affe-4881-a16b-d070391cafee" containerID="eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef" exitCode=0 Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.736644 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef76123-affe-4881-a16b-d070391cafee" containerID="7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1" exitCode=2 Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.736726 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef76123-affe-4881-a16b-d070391cafee" containerID="02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34" exitCode=0 Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.735264 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerDied","Data":"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef"} Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.736906 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerDied","Data":"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1"} Jan 26 16:27:49 crc kubenswrapper[4680]: I0126 16:27:49.736948 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerDied","Data":"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34"} Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.824904 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-nrvwr"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 
16:27:50.831761 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.855558 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-66ac-account-create-update-8f7pw"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.857777 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.865000 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.873970 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjw9\" (UniqueName: \"kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.874104 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.874258 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nrvwr"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.883328 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-66ac-account-create-update-8f7pw"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.896885 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8mgm4"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.898008 4680 util.go:30] "No sandbox for pod can be found. 
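The ceilometer-0 shutdown a few records back ("Killing container with a grace period", gracePeriod=30 for each container) is the usual TERM-then-KILL pattern: the runtime signals the container and escalates only if the grace period lapses; the subsequent ContainerDied records show exitCode=0 for proxy-httpd and the agents and exitCode=2 for sg-core. A generic Go sketch of that pattern against an ordinary process, not CRI-O's implementation:

```go
// TERM-then-KILL with a grace period, the shape behind gracePeriod=30.
package main

import (
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // polite request first
	select {
	case <-done: // exited within the grace period
	case <-time.After(grace):
		cmd.Process.Kill() // hard stop once the grace period elapses
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}
```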
Need to start a new one" pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.913508 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mgm4"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978228 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978300 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xql\" (UniqueName: \"kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978356 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978428 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978462 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczcq\" (UniqueName: \"kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.978481 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjw9\" (UniqueName: \"kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.979465 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.979915 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-shljw"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.981039 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.991888 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1697-account-create-update-wrqjs"] Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.993204 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:50 crc kubenswrapper[4680]: I0126 16:27:50.997537 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.014468 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1697-account-create-update-wrqjs"] Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.036698 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-shljw"] Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.044851 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjw9\" (UniqueName: \"kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9\") pod \"nova-api-db-create-nrvwr\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082045 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082136 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xql\" (UniqueName: \"kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082203 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082234 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24llr\" (UniqueName: \"kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082300 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gczcq\" (UniqueName: \"kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082340 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082358 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqdv\" (UniqueName: \"kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.082915 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.083024 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.103791 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xql\" (UniqueName: \"kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql\") pod \"nova-api-66ac-account-create-update-8f7pw\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.115369 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczcq\" (UniqueName: \"kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq\") pod \"nova-cell0-db-create-8mgm4\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.140637 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ddef-account-create-update-dhsbp"] Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.145514 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.152932 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.176584 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.182881 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddef-account-create-update-dhsbp"] Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.183936 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.183988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24llr\" (UniqueName: \"kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.184048 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzn82\" (UniqueName: \"kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.184104 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.184125 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.184142 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqdv\" (UniqueName: \"kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.185181 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.185837 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " 
pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.186166 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.211892 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqdv\" (UniqueName: \"kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv\") pod \"nova-cell0-1697-account-create-update-wrqjs\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.218869 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.225934 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24llr\" (UniqueName: \"kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr\") pod \"nova-cell1-db-create-shljw\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.304587 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzn82\" (UniqueName: \"kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.305024 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.305917 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.306596 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.327264 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.337214 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzn82\" (UniqueName: \"kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82\") pod \"nova-cell1-ddef-account-create-update-dhsbp\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.628901 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.721868 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.721978 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:27:51 crc kubenswrapper[4680]: I0126 16:27:51.752212 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.013820 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mgm4"] Jan 26 16:27:52 crc kubenswrapper[4680]: W0126 16:27:52.022567 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4656d6b_2378_4862_a99c_95a0836df0a4.slice/crio-675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023 WatchSource:0}: Error finding container 675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023: Status 404 returned error can't find the container with id 675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023 Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.039620 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nrvwr"] Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.054641 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-66ac-account-create-update-8f7pw"] Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.287160 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-shljw"] Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.317369 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1697-account-create-update-wrqjs"] Jan 26 16:27:52 crc kubenswrapper[4680]: W0126 16:27:52.352865 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd598b28f_3dbb_402d_a506_c4a4e19557b2.slice/crio-f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f WatchSource:0}: Error finding container f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f: Status 404 returned error can't find the container with id f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.495555 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddef-account-create-update-dhsbp"] Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.819994 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-shljw" event={"ID":"57c8fbd5-da3a-471b-8c0d-64f580b6c89e","Type":"ContainerStarted","Data":"fc1b94356735a44d1d9873115f8f4aaee27a6d6a41d1f2f223a4b3748e668b17"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.820043 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-shljw" event={"ID":"57c8fbd5-da3a-471b-8c0d-64f580b6c89e","Type":"ContainerStarted","Data":"b04eedee7a3fd420ef9f0346d4033aec852f24fe173807a4a21d097b1c584251"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.823372 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" 
event={"ID":"9904eca9-5cc3-4395-b834-f1eb89abdc95","Type":"ContainerStarted","Data":"0314149fa97cd4d3b0b890a217e8fb31a6e6a1acef7fe9ff18a23ec961c01583"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.830302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mgm4" event={"ID":"769e76e0-e28f-49df-a6ea-d786696f02ff","Type":"ContainerStarted","Data":"10f2c67a6b7581065cd96c2a8d60122ed50c4f1cc15660fb234b3e0b91a96bdb"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.830485 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mgm4" event={"ID":"769e76e0-e28f-49df-a6ea-d786696f02ff","Type":"ContainerStarted","Data":"0362b54cf90552ae3bca1c0096ea6edaebe1adbb2b735d9f6654d0af28691007"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.835530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" event={"ID":"d598b28f-3dbb-402d-a506-c4a4e19557b2","Type":"ContainerStarted","Data":"b463d9643621f1dd2d48656d6e2166ff927ec9ad1dab00d4d69c4ee29afd9f38"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.835574 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" event={"ID":"d598b28f-3dbb-402d-a506-c4a4e19557b2","Type":"ContainerStarted","Data":"f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.852175 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-shljw" podStartSLOduration=2.8521538680000003 podStartE2EDuration="2.852153868s" podCreationTimestamp="2026-01-26 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.839500131 +0000 UTC m=+1348.000772400" watchObservedRunningTime="2026-01-26 16:27:52.852153868 +0000 UTC m=+1348.013426137" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.855564 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nrvwr" event={"ID":"f4656d6b-2378-4862-a99c-95a0836df0a4","Type":"ContainerStarted","Data":"72f877ec0cbeb31efb4dd542d649a5457577af97b31b1e0d96ec497c5dd4e8ef"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.855614 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nrvwr" event={"ID":"f4656d6b-2378-4862-a99c-95a0836df0a4","Type":"ContainerStarted","Data":"675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.862643 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8mgm4" podStartSLOduration=2.862625203 podStartE2EDuration="2.862625203s" podCreationTimestamp="2026-01-26 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.856355946 +0000 UTC m=+1348.017628225" watchObservedRunningTime="2026-01-26 16:27:52.862625203 +0000 UTC m=+1348.023897472" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.874234 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-66ac-account-create-update-8f7pw" event={"ID":"f853c1f2-58ca-4001-828d-4fc087046a68","Type":"ContainerStarted","Data":"c96babf6006fd7626dbc1566134397a80f806ddd9203379fe6ac4640b00053eb"} Jan 26 16:27:52 crc 
kubenswrapper[4680]: I0126 16:27:52.874279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-66ac-account-create-update-8f7pw" event={"ID":"f853c1f2-58ca-4001-828d-4fc087046a68","Type":"ContainerStarted","Data":"2df62c3e310b07221d3f73efc3c008739eea740d9b5e48bf45dca9f7a8f0a2ed"} Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.885648 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" podStartSLOduration=2.885629352 podStartE2EDuration="2.885629352s" podCreationTimestamp="2026-01-26 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.880473586 +0000 UTC m=+1348.041745855" watchObservedRunningTime="2026-01-26 16:27:52.885629352 +0000 UTC m=+1348.046901621" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.959463 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" podStartSLOduration=1.959446613 podStartE2EDuration="1.959446613s" podCreationTimestamp="2026-01-26 16:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.910305488 +0000 UTC m=+1348.071577747" watchObservedRunningTime="2026-01-26 16:27:52.959446613 +0000 UTC m=+1348.120718882" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.984365 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-nrvwr" podStartSLOduration=2.984344424 podStartE2EDuration="2.984344424s" podCreationTimestamp="2026-01-26 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.93131732 +0000 UTC m=+1348.092589589" watchObservedRunningTime="2026-01-26 16:27:52.984344424 +0000 UTC m=+1348.145616693" Jan 26 16:27:52 crc kubenswrapper[4680]: I0126 16:27:52.985179 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-66ac-account-create-update-8f7pw" podStartSLOduration=2.985170038 podStartE2EDuration="2.985170038s" podCreationTimestamp="2026-01-26 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:27:52.949789111 +0000 UTC m=+1348.111061380" watchObservedRunningTime="2026-01-26 16:27:52.985170038 +0000 UTC m=+1348.146442307" Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.891314 4680 generic.go:334] "Generic (PLEG): container finished" podID="57c8fbd5-da3a-471b-8c0d-64f580b6c89e" containerID="fc1b94356735a44d1d9873115f8f4aaee27a6d6a41d1f2f223a4b3748e668b17" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.891555 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-shljw" event={"ID":"57c8fbd5-da3a-471b-8c0d-64f580b6c89e","Type":"ContainerDied","Data":"fc1b94356735a44d1d9873115f8f4aaee27a6d6a41d1f2f223a4b3748e668b17"} Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.894063 4680 generic.go:334] "Generic (PLEG): container finished" podID="9904eca9-5cc3-4395-b834-f1eb89abdc95" containerID="05211c639382d5429aa8b68ca5e0718e4d48dfd278dc0b9aca3b0f143482182c" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.894123 4680 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" event={"ID":"9904eca9-5cc3-4395-b834-f1eb89abdc95","Type":"ContainerDied","Data":"05211c639382d5429aa8b68ca5e0718e4d48dfd278dc0b9aca3b0f143482182c"} Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.898353 4680 generic.go:334] "Generic (PLEG): container finished" podID="769e76e0-e28f-49df-a6ea-d786696f02ff" containerID="10f2c67a6b7581065cd96c2a8d60122ed50c4f1cc15660fb234b3e0b91a96bdb" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.898441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mgm4" event={"ID":"769e76e0-e28f-49df-a6ea-d786696f02ff","Type":"ContainerDied","Data":"10f2c67a6b7581065cd96c2a8d60122ed50c4f1cc15660fb234b3e0b91a96bdb"} Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.900484 4680 generic.go:334] "Generic (PLEG): container finished" podID="d598b28f-3dbb-402d-a506-c4a4e19557b2" containerID="b463d9643621f1dd2d48656d6e2166ff927ec9ad1dab00d4d69c4ee29afd9f38" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.900542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" event={"ID":"d598b28f-3dbb-402d-a506-c4a4e19557b2","Type":"ContainerDied","Data":"b463d9643621f1dd2d48656d6e2166ff927ec9ad1dab00d4d69c4ee29afd9f38"} Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.902336 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4656d6b-2378-4862-a99c-95a0836df0a4" containerID="72f877ec0cbeb31efb4dd542d649a5457577af97b31b1e0d96ec497c5dd4e8ef" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.902427 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nrvwr" event={"ID":"f4656d6b-2378-4862-a99c-95a0836df0a4","Type":"ContainerDied","Data":"72f877ec0cbeb31efb4dd542d649a5457577af97b31b1e0d96ec497c5dd4e8ef"} Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.906062 4680 generic.go:334] "Generic (PLEG): container finished" podID="f853c1f2-58ca-4001-828d-4fc087046a68" containerID="c96babf6006fd7626dbc1566134397a80f806ddd9203379fe6ac4640b00053eb" exitCode=0 Jan 26 16:27:53 crc kubenswrapper[4680]: I0126 16:27:53.906212 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-66ac-account-create-update-8f7pw" event={"ID":"f853c1f2-58ca-4001-828d-4fc087046a68","Type":"ContainerDied","Data":"c96babf6006fd7626dbc1566134397a80f806ddd9203379fe6ac4640b00053eb"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.572969 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.642587 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.667656 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.680103 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.684341 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.690819 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.740163 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24llr\" (UniqueName: \"kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr\") pod \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.740289 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts\") pod \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\" (UID: \"57c8fbd5-da3a-471b-8c0d-64f580b6c89e\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.742368 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57c8fbd5-da3a-471b-8c0d-64f580b6c89e" (UID: "57c8fbd5-da3a-471b-8c0d-64f580b6c89e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.749159 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr" (OuterVolumeSpecName: "kube-api-access-24llr") pod "57c8fbd5-da3a-471b-8c0d-64f580b6c89e" (UID: "57c8fbd5-da3a-471b-8c0d-64f580b6c89e"). InnerVolumeSpecName "kube-api-access-24llr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842041 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczcq\" (UniqueName: \"kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq\") pod \"769e76e0-e28f-49df-a6ea-d786696f02ff\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842116 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts\") pod \"769e76e0-e28f-49df-a6ea-d786696f02ff\" (UID: \"769e76e0-e28f-49df-a6ea-d786696f02ff\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xql\" (UniqueName: \"kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql\") pod \"f853c1f2-58ca-4001-828d-4fc087046a68\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842240 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts\") pod \"9904eca9-5cc3-4395-b834-f1eb89abdc95\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842335 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts\") pod \"d598b28f-3dbb-402d-a506-c4a4e19557b2\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842366 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzqdv\" (UniqueName: \"kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv\") pod \"d598b28f-3dbb-402d-a506-c4a4e19557b2\" (UID: \"d598b28f-3dbb-402d-a506-c4a4e19557b2\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842392 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts\") pod \"f853c1f2-58ca-4001-828d-4fc087046a68\" (UID: \"f853c1f2-58ca-4001-828d-4fc087046a68\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842449 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjjw9\" (UniqueName: \"kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9\") pod \"f4656d6b-2378-4862-a99c-95a0836df0a4\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842502 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzn82\" (UniqueName: \"kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82\") pod \"9904eca9-5cc3-4395-b834-f1eb89abdc95\" (UID: \"9904eca9-5cc3-4395-b834-f1eb89abdc95\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.842550 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts\") pod \"f4656d6b-2378-4862-a99c-95a0836df0a4\" (UID: \"f4656d6b-2378-4862-a99c-95a0836df0a4\") " Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.843029 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24llr\" (UniqueName: \"kubernetes.io/projected/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-kube-api-access-24llr\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.843051 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c8fbd5-da3a-471b-8c0d-64f580b6c89e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.843619 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4656d6b-2378-4862-a99c-95a0836df0a4" (UID: "f4656d6b-2378-4862-a99c-95a0836df0a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.843720 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d598b28f-3dbb-402d-a506-c4a4e19557b2" (UID: "d598b28f-3dbb-402d-a506-c4a4e19557b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.846879 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9904eca9-5cc3-4395-b834-f1eb89abdc95" (UID: "9904eca9-5cc3-4395-b834-f1eb89abdc95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.846950 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f853c1f2-58ca-4001-828d-4fc087046a68" (UID: "f853c1f2-58ca-4001-828d-4fc087046a68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.847509 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq" (OuterVolumeSpecName: "kube-api-access-gczcq") pod "769e76e0-e28f-49df-a6ea-d786696f02ff" (UID: "769e76e0-e28f-49df-a6ea-d786696f02ff"). InnerVolumeSpecName "kube-api-access-gczcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.847703 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "769e76e0-e28f-49df-a6ea-d786696f02ff" (UID: "769e76e0-e28f-49df-a6ea-d786696f02ff"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.849917 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql" (OuterVolumeSpecName: "kube-api-access-z2xql") pod "f853c1f2-58ca-4001-828d-4fc087046a68" (UID: "f853c1f2-58ca-4001-828d-4fc087046a68"). InnerVolumeSpecName "kube-api-access-z2xql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.850173 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv" (OuterVolumeSpecName: "kube-api-access-lzqdv") pod "d598b28f-3dbb-402d-a506-c4a4e19557b2" (UID: "d598b28f-3dbb-402d-a506-c4a4e19557b2"). InnerVolumeSpecName "kube-api-access-lzqdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.850234 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9" (OuterVolumeSpecName: "kube-api-access-hjjw9") pod "f4656d6b-2378-4862-a99c-95a0836df0a4" (UID: "f4656d6b-2378-4862-a99c-95a0836df0a4"). InnerVolumeSpecName "kube-api-access-hjjw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.856267 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82" (OuterVolumeSpecName: "kube-api-access-gzn82") pod "9904eca9-5cc3-4395-b834-f1eb89abdc95" (UID: "9904eca9-5cc3-4395-b834-f1eb89abdc95"). InnerVolumeSpecName "kube-api-access-gzn82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.924189 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mgm4" event={"ID":"769e76e0-e28f-49df-a6ea-d786696f02ff","Type":"ContainerDied","Data":"0362b54cf90552ae3bca1c0096ea6edaebe1adbb2b735d9f6654d0af28691007"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.924229 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0362b54cf90552ae3bca1c0096ea6edaebe1adbb2b735d9f6654d0af28691007" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.924285 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mgm4" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.927433 4680 generic.go:334] "Generic (PLEG): container finished" podID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerID="d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd" exitCode=137 Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.927513 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerDied","Data":"d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.927554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerStarted","Data":"b4675cf261bba8b67434d4c8ef50209c8b8e949c2c7040b35f76bfb3fc7d8240"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.927575 4680 scope.go:117] "RemoveContainer" containerID="e75f034b772315c38ada5902c9682b54464ec4bd0d4a023917a6ced3a1564c93" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.940010 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" event={"ID":"d598b28f-3dbb-402d-a506-c4a4e19557b2","Type":"ContainerDied","Data":"f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948164 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8177ae450b3385d8f9706493f877f0a244f65bc372a2bf16504d3f03605f75f" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.945478 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e76e0-e28f-49df-a6ea-d786696f02ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948218 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xql\" (UniqueName: \"kubernetes.io/projected/f853c1f2-58ca-4001-828d-4fc087046a68-kube-api-access-z2xql\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948232 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9904eca9-5cc3-4395-b834-f1eb89abdc95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948244 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d598b28f-3dbb-402d-a506-c4a4e19557b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.940182 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1697-account-create-update-wrqjs" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948255 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzqdv\" (UniqueName: \"kubernetes.io/projected/d598b28f-3dbb-402d-a506-c4a4e19557b2-kube-api-access-lzqdv\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948741 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f853c1f2-58ca-4001-828d-4fc087046a68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948864 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjjw9\" (UniqueName: \"kubernetes.io/projected/f4656d6b-2378-4862-a99c-95a0836df0a4-kube-api-access-hjjw9\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948897 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzn82\" (UniqueName: \"kubernetes.io/projected/9904eca9-5cc3-4395-b834-f1eb89abdc95-kube-api-access-gzn82\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948909 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4656d6b-2378-4862-a99c-95a0836df0a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.948920 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczcq\" (UniqueName: \"kubernetes.io/projected/769e76e0-e28f-49df-a6ea-d786696f02ff-kube-api-access-gczcq\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.950469 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nrvwr" event={"ID":"f4656d6b-2378-4862-a99c-95a0836df0a4","Type":"ContainerDied","Data":"675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.950502 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nrvwr" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.950513 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675cb03ac5516c5f225f99e874ad5e4c1dc036c938dc7bfe6517a30369d48023" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.961040 4680 generic.go:334] "Generic (PLEG): container finished" podID="34651440-00a2-4b50-a6cc-a0230d4def92" containerID="c69624d2bcc285b657d662c97a87069e6ddd188655dd4cffa20769a64bbb9a15" exitCode=137 Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.961140 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerDied","Data":"c69624d2bcc285b657d662c97a87069e6ddd188655dd4cffa20769a64bbb9a15"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.961165 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8657f7848d-ls2sv" event={"ID":"34651440-00a2-4b50-a6cc-a0230d4def92","Type":"ContainerStarted","Data":"80620f1fa932172d7688c0c2799fe3557b29d02414a23f1f7d785692710dfe69"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.968700 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-66ac-account-create-update-8f7pw" event={"ID":"f853c1f2-58ca-4001-828d-4fc087046a68","Type":"ContainerDied","Data":"2df62c3e310b07221d3f73efc3c008739eea740d9b5e48bf45dca9f7a8f0a2ed"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.968736 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df62c3e310b07221d3f73efc3c008739eea740d9b5e48bf45dca9f7a8f0a2ed" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.968794 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-66ac-account-create-update-8f7pw" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.971479 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-shljw" event={"ID":"57c8fbd5-da3a-471b-8c0d-64f580b6c89e","Type":"ContainerDied","Data":"b04eedee7a3fd420ef9f0346d4033aec852f24fe173807a4a21d097b1c584251"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.971518 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04eedee7a3fd420ef9f0346d4033aec852f24fe173807a4a21d097b1c584251" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.971587 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-shljw" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.981927 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.982572 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddef-account-create-update-dhsbp" event={"ID":"9904eca9-5cc3-4395-b834-f1eb89abdc95","Type":"ContainerDied","Data":"0314149fa97cd4d3b0b890a217e8fb31a6e6a1acef7fe9ff18a23ec961c01583"} Jan 26 16:27:55 crc kubenswrapper[4680]: I0126 16:27:55.982610 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0314149fa97cd4d3b0b890a217e8fb31a6e6a1acef7fe9ff18a23ec961c01583" Jan 26 16:27:56 crc kubenswrapper[4680]: I0126 16:27:56.182563 4680 scope.go:117] "RemoveContainer" containerID="f7c9019de00f5906ef764fd80fe6b9342299dd73c58ad71076ff33557704fd7c" Jan 26 16:27:57 crc kubenswrapper[4680]: I0126 16:27:57.909617 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.030508 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef76123-affe-4881-a16b-d070391cafee" containerID="c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb" exitCode=0 Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.030553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerDied","Data":"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb"} Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.030581 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef76123-affe-4881-a16b-d070391cafee","Type":"ContainerDied","Data":"f98400de191fcbf28265ab9daf2da6c1f032c1dde13f183171077d3218ddc52d"} Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.030597 4680 scope.go:117] "RemoveContainer" containerID="eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.030704 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087238 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087330 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087368 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087432 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087485 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087547 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqqk8\" (UniqueName: \"kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.087566 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml\") pod \"3ef76123-affe-4881-a16b-d070391cafee\" (UID: \"3ef76123-affe-4881-a16b-d070391cafee\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.094564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.096672 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.124227 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8" (OuterVolumeSpecName: "kube-api-access-hqqk8") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "kube-api-access-hqqk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.136616 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts" (OuterVolumeSpecName: "scripts") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.152514 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.189706 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqqk8\" (UniqueName: \"kubernetes.io/projected/3ef76123-affe-4881-a16b-d070391cafee-kube-api-access-hqqk8\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.189744 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.189753 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.189761 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.189771 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef76123-affe-4881-a16b-d070391cafee-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.198760 4680 scope.go:117] "RemoveContainer" containerID="7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.221183 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data" (OuterVolumeSpecName: "config-data") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.226364 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef76123-affe-4881-a16b-d070391cafee" (UID: "3ef76123-affe-4881-a16b-d070391cafee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.233185 4680 scope.go:117] "RemoveContainer" containerID="02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.292609 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.292654 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef76123-affe-4881-a16b-d070391cafee-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.298263 4680 scope.go:117] "RemoveContainer" containerID="c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.441799 4680 scope.go:117] "RemoveContainer" containerID="eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.446178 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef\": container with ID starting with eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef not found: ID does not exist" containerID="eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.446249 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef"} err="failed to get container status \"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef\": rpc error: code = NotFound desc = could not find container \"eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef\": container with ID starting with eb26c07185e50a515e506aa3e2746d966649abb087db6d6398db84298cf8e2ef not found: ID does not exist" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.446282 4680 scope.go:117] "RemoveContainer" containerID="7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.448208 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1\": container with ID starting with 7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1 not found: ID does not exist" containerID="7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.448240 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1"} err="failed to get container status 
\"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1\": rpc error: code = NotFound desc = could not find container \"7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1\": container with ID starting with 7873c4d78810c41008c91790bb8075cca619e9fe2b2913a540b0317d012ec9e1 not found: ID does not exist" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.448289 4680 scope.go:117] "RemoveContainer" containerID="02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.450621 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34\": container with ID starting with 02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34 not found: ID does not exist" containerID="02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.450667 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34"} err="failed to get container status \"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34\": rpc error: code = NotFound desc = could not find container \"02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34\": container with ID starting with 02eefd5f3c697ffe7b81930e1467ef79114c42653cd6caf32e397fcb57f88b34 not found: ID does not exist" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.450689 4680 scope.go:117] "RemoveContainer" containerID="c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.451709 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb\": container with ID starting with c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb not found: ID does not exist" containerID="c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.451748 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb"} err="failed to get container status \"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb\": rpc error: code = NotFound desc = could not find container \"c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb\": container with ID starting with c468ab90c7071439876f1be94ced3f2536897c13ef0e2ae4113a81fdf3a5dfbb not found: ID does not exist" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.469545 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.485367 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.504502 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.504972 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769e76e0-e28f-49df-a6ea-d786696f02ff" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.504985 4680 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="769e76e0-e28f-49df-a6ea-d786696f02ff" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.504998 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="proxy-httpd" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505006 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="proxy-httpd" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505015 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904eca9-5cc3-4395-b834-f1eb89abdc95" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505021 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904eca9-5cc3-4395-b834-f1eb89abdc95" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505034 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-central-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505040 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-central-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505048 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4656d6b-2378-4862-a99c-95a0836df0a4" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505054 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4656d6b-2378-4862-a99c-95a0836df0a4" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505062 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f853c1f2-58ca-4001-828d-4fc087046a68" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505082 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f853c1f2-58ca-4001-828d-4fc087046a68" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505093 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="sg-core" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505099 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="sg-core" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505124 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c8fbd5-da3a-471b-8c0d-64f580b6c89e" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505137 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c8fbd5-da3a-471b-8c0d-64f580b6c89e" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505146 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-notification-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505152 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-notification-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: E0126 16:27:58.505166 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d598b28f-3dbb-402d-a506-c4a4e19557b2" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505172 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d598b28f-3dbb-402d-a506-c4a4e19557b2" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505357 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="769e76e0-e28f-49df-a6ea-d786696f02ff" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505371 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="proxy-httpd" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505384 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c8fbd5-da3a-471b-8c0d-64f580b6c89e" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505393 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-notification-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505401 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="ceilometer-central-agent" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505410 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d598b28f-3dbb-402d-a506-c4a4e19557b2" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505420 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef76123-affe-4881-a16b-d070391cafee" containerName="sg-core" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505427 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4656d6b-2378-4862-a99c-95a0836df0a4" containerName="mariadb-database-create" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505438 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9904eca9-5cc3-4395-b834-f1eb89abdc95" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.505446 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f853c1f2-58ca-4001-828d-4fc087046a68" containerName="mariadb-account-create-update" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.509021 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.512627 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.515592 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.560885 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603092 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603158 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603229 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603298 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfcb\" (UniqueName: \"kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603333 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603376 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.603421 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.706459 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.706739 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.706883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.707044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.707199 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfcb\" (UniqueName: \"kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.707362 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.707599 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.708214 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.710675 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.714274 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.719623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.722301 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.727486 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.730894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfcb\" (UniqueName: \"kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb\") pod \"ceilometer-0\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.819937 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.835244 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.921476 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.921772 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle\") pod \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.921818 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom\") pod \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.921976 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data\") pod \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.922024 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlrqk\" (UniqueName: \"kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk\") pod \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\" (UID: \"8f0cd221-e6ba-4921-9c2b-49e6424cd321\") " Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.930183 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f0cd221-e6ba-4921-9c2b-49e6424cd321" (UID: "8f0cd221-e6ba-4921-9c2b-49e6424cd321"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.936209 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk" (OuterVolumeSpecName: "kube-api-access-wlrqk") pod "8f0cd221-e6ba-4921-9c2b-49e6424cd321" (UID: "8f0cd221-e6ba-4921-9c2b-49e6424cd321"). InnerVolumeSpecName "kube-api-access-wlrqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:58 crc kubenswrapper[4680]: I0126 16:27:58.974261 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f0cd221-e6ba-4921-9c2b-49e6424cd321" (UID: "8f0cd221-e6ba-4921-9c2b-49e6424cd321"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.023060 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data" (OuterVolumeSpecName: "config-data") pod "8f0cd221-e6ba-4921-9c2b-49e6424cd321" (UID: "8f0cd221-e6ba-4921-9c2b-49e6424cd321"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.023452 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle\") pod \"65bc9e91-e18f-4da0-a068-1a2f5199068f\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.023540 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vg7r\" (UniqueName: \"kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r\") pod \"65bc9e91-e18f-4da0-a068-1a2f5199068f\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.023602 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom\") pod \"65bc9e91-e18f-4da0-a068-1a2f5199068f\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.023648 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data\") pod \"65bc9e91-e18f-4da0-a068-1a2f5199068f\" (UID: \"65bc9e91-e18f-4da0-a068-1a2f5199068f\") " Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.026034 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.026082 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.026095 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f0cd221-e6ba-4921-9c2b-49e6424cd321-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.026105 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlrqk\" (UniqueName: \"kubernetes.io/projected/8f0cd221-e6ba-4921-9c2b-49e6424cd321-kube-api-access-wlrqk\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.028178 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "65bc9e91-e18f-4da0-a068-1a2f5199068f" (UID: "65bc9e91-e18f-4da0-a068-1a2f5199068f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.044969 4680 generic.go:334] "Generic (PLEG): container finished" podID="65bc9e91-e18f-4da0-a068-1a2f5199068f" containerID="576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811" exitCode=137 Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.045274 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57966556c4-5mgs4" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.045297 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57966556c4-5mgs4" event={"ID":"65bc9e91-e18f-4da0-a068-1a2f5199068f","Type":"ContainerDied","Data":"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811"} Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.045576 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57966556c4-5mgs4" event={"ID":"65bc9e91-e18f-4da0-a068-1a2f5199068f","Type":"ContainerDied","Data":"bac481a4776d7513b511f013cf28b1321ee8b679b1f5210a355ca43674e873c7"} Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.045600 4680 scope.go:117] "RemoveContainer" containerID="576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.045752 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r" (OuterVolumeSpecName: "kube-api-access-2vg7r") pod "65bc9e91-e18f-4da0-a068-1a2f5199068f" (UID: "65bc9e91-e18f-4da0-a068-1a2f5199068f"). InnerVolumeSpecName "kube-api-access-2vg7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.053688 4680 generic.go:334] "Generic (PLEG): container finished" podID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" containerID="e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a" exitCode=137 Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.053915 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85875d88f7-m4tq6" event={"ID":"8f0cd221-e6ba-4921-9c2b-49e6424cd321","Type":"ContainerDied","Data":"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a"} Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.054026 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85875d88f7-m4tq6" event={"ID":"8f0cd221-e6ba-4921-9c2b-49e6424cd321","Type":"ContainerDied","Data":"a79bb6b7cc79809dd7099d3cf60e1357e18ce4403359860f7c7e78057f73cbb7"} Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.053937 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85875d88f7-m4tq6" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.055586 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65bc9e91-e18f-4da0-a068-1a2f5199068f" (UID: "65bc9e91-e18f-4da0-a068-1a2f5199068f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.097369 4680 scope.go:117] "RemoveContainer" containerID="576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811" Jan 26 16:27:59 crc kubenswrapper[4680]: E0126 16:27:59.109675 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811\": container with ID starting with 576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811 not found: ID does not exist" containerID="576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.109724 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811"} err="failed to get container status \"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811\": rpc error: code = NotFound desc = could not find container \"576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811\": container with ID starting with 576d4a606a0b2978cdb581955b8cf19ca43b09e50193fa740b8f24b41cfd4811 not found: ID does not exist" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.109756 4680 scope.go:117] "RemoveContainer" containerID="e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.115558 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.116892 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data" (OuterVolumeSpecName: "config-data") pod "65bc9e91-e18f-4da0-a068-1a2f5199068f" (UID: "65bc9e91-e18f-4da0-a068-1a2f5199068f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.133981 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.134018 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.134034 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vg7r\" (UniqueName: \"kubernetes.io/projected/65bc9e91-e18f-4da0-a068-1a2f5199068f-kube-api-access-2vg7r\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.134043 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65bc9e91-e18f-4da0-a068-1a2f5199068f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.145419 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-85875d88f7-m4tq6"] Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.150654 4680 scope.go:117] "RemoveContainer" containerID="e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a" Jan 26 16:27:59 crc kubenswrapper[4680]: E0126 16:27:59.151124 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a\": container with ID starting with e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a not found: ID does not exist" containerID="e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.151177 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a"} err="failed to get container status \"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a\": rpc error: code = NotFound desc = could not find container \"e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a\": container with ID starting with e6297381910b3cf505d0dcc953e25b93e4e586b13b46f2b7ad82e9d42c1ddd3a not found: ID does not exist" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.192932 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef76123-affe-4881-a16b-d070391cafee" path="/var/lib/kubelet/pods/3ef76123-affe-4881-a16b-d070391cafee/volumes" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.193992 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" path="/var/lib/kubelet/pods/8f0cd221-e6ba-4921-9c2b-49e6424cd321/volumes" Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.370844 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.380131 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57966556c4-5mgs4"] Jan 26 16:27:59 crc kubenswrapper[4680]: I0126 16:27:59.501107 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:27:59 crc kubenswrapper[4680]: 
W0126 16:27:59.502097 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0ce096_a04e_4cd1_b821_c7e1c9b48393.slice/crio-df74fa83dd03c689221b2ccf7130e08cf92ad2308b33739e09cdfad94fbc8123 WatchSource:0}: Error finding container df74fa83dd03c689221b2ccf7130e08cf92ad2308b33739e09cdfad94fbc8123: Status 404 returned error can't find the container with id df74fa83dd03c689221b2ccf7130e08cf92ad2308b33739e09cdfad94fbc8123 Jan 26 16:28:00 crc kubenswrapper[4680]: I0126 16:28:00.064373 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerStarted","Data":"5a1cfaf7b588ca7766237c6bac800012be64aa0140a663966c12d1d91c286a14"} Jan 26 16:28:00 crc kubenswrapper[4680]: I0126 16:28:00.064931 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerStarted","Data":"5b413d9facd184f9915c31138152ecf8e514e941567a536fbd131e392ddff8ba"} Jan 26 16:28:00 crc kubenswrapper[4680]: I0126 16:28:00.064950 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerStarted","Data":"df74fa83dd03c689221b2ccf7130e08cf92ad2308b33739e09cdfad94fbc8123"} Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.182288 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65bc9e91-e18f-4da0-a068-1a2f5199068f" path="/var/lib/kubelet/pods/65bc9e91-e18f-4da0-a068-1a2f5199068f/volumes" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.353847 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hj79q"] Jan 26 16:28:01 crc kubenswrapper[4680]: E0126 16:28:01.354284 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" containerName="heat-api" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.354303 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" containerName="heat-api" Jan 26 16:28:01 crc kubenswrapper[4680]: E0126 16:28:01.354317 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65bc9e91-e18f-4da0-a068-1a2f5199068f" containerName="heat-cfnapi" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.354323 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="65bc9e91-e18f-4da0-a068-1a2f5199068f" containerName="heat-cfnapi" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.354500 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0cd221-e6ba-4921-9c2b-49e6424cd321" containerName="heat-api" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.354517 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="65bc9e91-e18f-4da0-a068-1a2f5199068f" containerName="heat-cfnapi" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.355109 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.356814 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.357183 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.360381 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-29w67" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.365909 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hj79q"] Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.478066 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.478485 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.478579 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.478686 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h77d\" (UniqueName: \"kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.580177 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.580253 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.580286 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: 
\"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.580343 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h77d\" (UniqueName: \"kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.586047 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.586232 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.593790 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.854744 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h77d\" (UniqueName: \"kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d\") pod \"nova-cell0-conductor-db-sync-hj79q\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") " pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:01 crc kubenswrapper[4680]: I0126 16:28:01.982371 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hj79q" Jan 26 16:28:02 crc kubenswrapper[4680]: I0126 16:28:02.102975 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerStarted","Data":"04305e448433871b2b89eae828a6207a87196b8f963a0a473baf18275413007a"} Jan 26 16:28:02 crc kubenswrapper[4680]: I0126 16:28:02.564835 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hj79q"] Jan 26 16:28:03 crc kubenswrapper[4680]: I0126 16:28:03.117809 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hj79q" event={"ID":"0092c6fa-128e-474b-b8d0-379592af1dc2","Type":"ContainerStarted","Data":"81bb780866640cab00576c52eea7a8ce50dd55586cd7970be7737ae9bc9d7cf0"} Jan 26 16:28:04 crc kubenswrapper[4680]: I0126 16:28:04.132226 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerStarted","Data":"0f10e6081ebc4ea04b6b084eaec91dc3348718cc4bd00275ea0e0c12ef7c62e0"} Jan 26 16:28:04 crc kubenswrapper[4680]: I0126 16:28:04.133946 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:28:04 crc kubenswrapper[4680]: I0126 16:28:04.162948 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.81101212 podStartE2EDuration="6.162929925s" podCreationTimestamp="2026-01-26 16:27:58 +0000 UTC" firstStartedPulling="2026-01-26 16:27:59.505739421 +0000 UTC m=+1354.667011690" lastFinishedPulling="2026-01-26 16:28:02.857657226 +0000 UTC m=+1358.018929495" observedRunningTime="2026-01-26 16:28:04.156349129 +0000 UTC m=+1359.317621398" watchObservedRunningTime="2026-01-26 16:28:04.162929925 +0000 UTC m=+1359.324202194" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.113325 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.113369 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.114973 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.343382 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.343448 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8657f7848d-ls2sv" Jan 26 16:28:05 crc kubenswrapper[4680]: I0126 16:28:05.345299 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 16:28:10 crc kubenswrapper[4680]: I0126 16:28:10.367756 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 16:28:10 crc kubenswrapper[4680]: I0126 16:28:10.371646 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-central-agent" containerID="cri-o://5b413d9facd184f9915c31138152ecf8e514e941567a536fbd131e392ddff8ba" gracePeriod=30 Jan 26 16:28:10 crc kubenswrapper[4680]: I0126 16:28:10.371805 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="proxy-httpd" containerID="cri-o://0f10e6081ebc4ea04b6b084eaec91dc3348718cc4bd00275ea0e0c12ef7c62e0" gracePeriod=30 Jan 26 16:28:10 crc kubenswrapper[4680]: I0126 16:28:10.371857 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="sg-core" containerID="cri-o://04305e448433871b2b89eae828a6207a87196b8f963a0a473baf18275413007a" gracePeriod=30 Jan 26 16:28:10 crc kubenswrapper[4680]: I0126 16:28:10.371902 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-notification-agent" containerID="cri-o://5a1cfaf7b588ca7766237c6bac800012be64aa0140a663966c12d1d91c286a14" gracePeriod=30 Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251135 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerID="0f10e6081ebc4ea04b6b084eaec91dc3348718cc4bd00275ea0e0c12ef7c62e0" exitCode=0 Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251434 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerID="04305e448433871b2b89eae828a6207a87196b8f963a0a473baf18275413007a" exitCode=2 Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251447 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerID="5a1cfaf7b588ca7766237c6bac800012be64aa0140a663966c12d1d91c286a14" exitCode=0 Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251472 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerDied","Data":"0f10e6081ebc4ea04b6b084eaec91dc3348718cc4bd00275ea0e0c12ef7c62e0"} Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251506 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerDied","Data":"04305e448433871b2b89eae828a6207a87196b8f963a0a473baf18275413007a"} Jan 26 16:28:11 crc kubenswrapper[4680]: I0126 16:28:11.251520 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerDied","Data":"5a1cfaf7b588ca7766237c6bac800012be64aa0140a663966c12d1d91c286a14"} Jan 26 16:28:13 crc kubenswrapper[4680]: I0126 16:28:13.297795 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerID="5b413d9facd184f9915c31138152ecf8e514e941567a536fbd131e392ddff8ba" exitCode=0 Jan 26 16:28:13 crc kubenswrapper[4680]: I0126 16:28:13.301151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerDied","Data":"5b413d9facd184f9915c31138152ecf8e514e941567a536fbd131e392ddff8ba"} Jan 26 16:28:13 crc kubenswrapper[4680]: I0126 16:28:13.948860 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.113867 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.114531 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.114720 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfcb\" (UniqueName: \"kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.114868 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.115003 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.115205 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.115334 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.115513 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle\") pod \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\" (UID: \"eb0ce096-a04e-4cd1-b821-c7e1c9b48393\") " Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.116211 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.118467 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.121029 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.120900 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts" (OuterVolumeSpecName: "scripts") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.122614 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb" (OuterVolumeSpecName: "kube-api-access-7qfcb") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "kube-api-access-7qfcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.164258 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.200865 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.218383 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data" (OuterVolumeSpecName: "config-data") pod "eb0ce096-a04e-4cd1-b821-c7e1c9b48393" (UID: "eb0ce096-a04e-4cd1-b821-c7e1c9b48393"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.223538 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qfcb\" (UniqueName: \"kubernetes.io/projected/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-kube-api-access-7qfcb\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.223806 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.223892 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.223999 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.224167 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ce096-a04e-4cd1-b821-c7e1c9b48393-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.310770 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hj79q" event={"ID":"0092c6fa-128e-474b-b8d0-379592af1dc2","Type":"ContainerStarted","Data":"d185c77b64e6badf59bf2600c5dc6d637ba6d226bcff51418270e12c22a12ce5"} Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.317731 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb0ce096-a04e-4cd1-b821-c7e1c9b48393","Type":"ContainerDied","Data":"df74fa83dd03c689221b2ccf7130e08cf92ad2308b33739e09cdfad94fbc8123"} Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.317779 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.317781 4680 scope.go:117] "RemoveContainer" containerID="0f10e6081ebc4ea04b6b084eaec91dc3348718cc4bd00275ea0e0c12ef7c62e0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.357880 4680 scope.go:117] "RemoveContainer" containerID="04305e448433871b2b89eae828a6207a87196b8f963a0a473baf18275413007a" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.377546 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hj79q" podStartSLOduration=2.354619762 podStartE2EDuration="13.377426018s" podCreationTimestamp="2026-01-26 16:28:01 +0000 UTC" firstStartedPulling="2026-01-26 16:28:02.57128253 +0000 UTC m=+1357.732554799" lastFinishedPulling="2026-01-26 16:28:13.594088786 +0000 UTC m=+1368.755361055" observedRunningTime="2026-01-26 16:28:14.336679527 +0000 UTC m=+1369.497951796" watchObservedRunningTime="2026-01-26 16:28:14.377426018 +0000 UTC m=+1369.538698287" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.391053 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.398833 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.402881 4680 scope.go:117] "RemoveContainer" containerID="5a1cfaf7b588ca7766237c6bac800012be64aa0140a663966c12d1d91c286a14" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.407707 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:28:14 crc kubenswrapper[4680]: E0126 16:28:14.408187 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-notification-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408206 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-notification-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: E0126 16:28:14.408231 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="proxy-httpd" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408238 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="proxy-httpd" Jan 26 16:28:14 crc kubenswrapper[4680]: E0126 16:28:14.408256 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="sg-core" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408261 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="sg-core" Jan 26 16:28:14 crc kubenswrapper[4680]: E0126 16:28:14.408272 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-central-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408278 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-central-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408570 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-central-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408599 4680 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="sg-core" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408623 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="proxy-httpd" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.408632 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" containerName="ceilometer-notification-agent" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.410199 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.418815 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.418885 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.427700 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428080 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428250 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428343 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d96w4\" (UniqueName: \"kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.428729 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.456005 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.479589 4680 scope.go:117] "RemoveContainer" containerID="5b413d9facd184f9915c31138152ecf8e514e941567a536fbd131e392ddff8ba" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.530596 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531751 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d96w4\" (UniqueName: \"kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531786 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531822 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531904 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531931 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.531976 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.532496 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.543676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " 
pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.555651 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.556509 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.556620 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.561678 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d96w4\" (UniqueName: \"kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.563679 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") " pod="openstack/ceilometer-0" Jan 26 16:28:14 crc kubenswrapper[4680]: I0126 16:28:14.749227 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:28:15 crc kubenswrapper[4680]: I0126 16:28:15.113644 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 26 16:28:15 crc kubenswrapper[4680]: I0126 16:28:15.181051 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0ce096-a04e-4cd1-b821-c7e1c9b48393" path="/var/lib/kubelet/pods/eb0ce096-a04e-4cd1-b821-c7e1c9b48393/volumes"
Jan 26 16:28:15 crc kubenswrapper[4680]: I0126 16:28:15.258328 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:15 crc kubenswrapper[4680]: I0126 16:28:15.333151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerStarted","Data":"e0ba7af2d0390471657e4cd6a3280cfc9e7c1a815c58e502aa5dca61159f402a"}
Jan 26 16:28:15 crc kubenswrapper[4680]: I0126 16:28:15.345522 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 26 16:28:16 crc kubenswrapper[4680]: I0126 16:28:16.339277 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerStarted","Data":"e3827ba03ef26beb26a031201ae48069d62a537f32c8efac3a113500cd2f6abc"}
Jan 26 16:28:16 crc kubenswrapper[4680]: I0126 16:28:16.548276 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:17 crc kubenswrapper[4680]: I0126 16:28:17.331271 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 16:28:17 crc kubenswrapper[4680]: I0126 16:28:17.331337 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:28:17 crc kubenswrapper[4680]: I0126 16:28:17.331286 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 16:28:17 crc kubenswrapper[4680]: I0126 16:28:17.331608 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:28:19 crc kubenswrapper[4680]: I0126 16:28:19.366868 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerStarted","Data":"f371f65943070582d0ff47fa932bc7dbc200fed4fffb825a562252b5478021f7"}
Jan 26 16:28:19 crc kubenswrapper[4680]: I0126 16:28:19.367375 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerStarted","Data":"0047181693e72593f82efa58bea18a1ad92ab1948c5f2b38aba7f4c5ca46f7f6"}
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.403719 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerStarted","Data":"3eb8e59b3fd0fbf140f16f4d3772294cc87951c074ae1294fdb00faac871d396"}
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.405413 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.403902 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-notification-agent" containerID="cri-o://0047181693e72593f82efa58bea18a1ad92ab1948c5f2b38aba7f4c5ca46f7f6" gracePeriod=30
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.403927 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="sg-core" containerID="cri-o://f371f65943070582d0ff47fa932bc7dbc200fed4fffb825a562252b5478021f7" gracePeriod=30
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.404283 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="proxy-httpd" containerID="cri-o://3eb8e59b3fd0fbf140f16f4d3772294cc87951c074ae1294fdb00faac871d396" gracePeriod=30
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.403862 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-central-agent" containerID="cri-o://e3827ba03ef26beb26a031201ae48069d62a537f32c8efac3a113500cd2f6abc" gracePeriod=30
Jan 26 16:28:23 crc kubenswrapper[4680]: I0126 16:28:23.430845 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.752157487 podStartE2EDuration="9.430829341s" podCreationTimestamp="2026-01-26 16:28:14 +0000 UTC" firstStartedPulling="2026-01-26 16:28:15.262968994 +0000 UTC m=+1370.424241253" lastFinishedPulling="2026-01-26 16:28:20.941640838 +0000 UTC m=+1376.102913107" observedRunningTime="2026-01-26 16:28:23.429490303 +0000 UTC m=+1378.590762572" watchObservedRunningTime="2026-01-26 16:28:23.430829341 +0000 UTC m=+1378.592101610"
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.416504 4680 generic.go:334] "Generic (PLEG): container finished" podID="ae15c480-9662-404f-9778-d4e130490ed0" containerID="3eb8e59b3fd0fbf140f16f4d3772294cc87951c074ae1294fdb00faac871d396" exitCode=0
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417593 4680 generic.go:334] "Generic (PLEG): container finished" podID="ae15c480-9662-404f-9778-d4e130490ed0" containerID="f371f65943070582d0ff47fa932bc7dbc200fed4fffb825a562252b5478021f7" exitCode=2
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.416547 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerDied","Data":"3eb8e59b3fd0fbf140f16f4d3772294cc87951c074ae1294fdb00faac871d396"}
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417740 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerDied","Data":"f371f65943070582d0ff47fa932bc7dbc200fed4fffb825a562252b5478021f7"}
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417765 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerDied","Data":"0047181693e72593f82efa58bea18a1ad92ab1948c5f2b38aba7f4c5ca46f7f6"}
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417689 4680 generic.go:334] "Generic (PLEG): container finished" podID="ae15c480-9662-404f-9778-d4e130490ed0" containerID="0047181693e72593f82efa58bea18a1ad92ab1948c5f2b38aba7f4c5ca46f7f6" exitCode=0
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417809 4680 generic.go:334] "Generic (PLEG): container finished" podID="ae15c480-9662-404f-9778-d4e130490ed0" containerID="e3827ba03ef26beb26a031201ae48069d62a537f32c8efac3a113500cd2f6abc" exitCode=0
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.417839 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerDied","Data":"e3827ba03ef26beb26a031201ae48069d62a537f32c8efac3a113500cd2f6abc"}
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.808663 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934353 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934428 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934542 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934700 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934795 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934859 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d96w4\" (UniqueName: \"kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.934974 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle\") pod \"ae15c480-9662-404f-9778-d4e130490ed0\" (UID: \"ae15c480-9662-404f-9778-d4e130490ed0\") "
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.935849 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.935863 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.941457 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts" (OuterVolumeSpecName: "scripts") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.949519 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4" (OuterVolumeSpecName: "kube-api-access-d96w4") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "kube-api-access-d96w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:28:24 crc kubenswrapper[4680]: I0126 16:28:24.971032 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.014113 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.038649 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.038889 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d96w4\" (UniqueName: \"kubernetes.io/projected/ae15c480-9662-404f-9778-d4e130490ed0-kube-api-access-d96w4\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.038977 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.039057 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.039275 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.039371 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae15c480-9662-404f-9778-d4e130490ed0-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.051145 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data" (OuterVolumeSpecName: "config-data") pod "ae15c480-9662-404f-9778-d4e130490ed0" (UID: "ae15c480-9662-404f-9778-d4e130490ed0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.141646 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae15c480-9662-404f-9778-d4e130490ed0-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.431296 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae15c480-9662-404f-9778-d4e130490ed0","Type":"ContainerDied","Data":"e0ba7af2d0390471657e4cd6a3280cfc9e7c1a815c58e502aa5dca61159f402a"}
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.431349 4680 scope.go:117] "RemoveContainer" containerID="3eb8e59b3fd0fbf140f16f4d3772294cc87951c074ae1294fdb00faac871d396"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.431577 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.477756 4680 scope.go:117] "RemoveContainer" containerID="f371f65943070582d0ff47fa932bc7dbc200fed4fffb825a562252b5478021f7"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.507759 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.516984 4680 scope.go:117] "RemoveContainer" containerID="0047181693e72593f82efa58bea18a1ad92ab1948c5f2b38aba7f4c5ca46f7f6"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.521244 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.558284 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:25 crc kubenswrapper[4680]: E0126 16:28:25.558847 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="proxy-httpd"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.558873 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="proxy-httpd"
Jan 26 16:28:25 crc kubenswrapper[4680]: E0126 16:28:25.558906 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-notification-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.558915 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-notification-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: E0126 16:28:25.558928 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-central-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.558936 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-central-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: E0126 16:28:25.558948 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="sg-core"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.558954 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="sg-core"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.559162 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="proxy-httpd"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.559188 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-central-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.559206 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="sg-core"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.559219 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae15c480-9662-404f-9778-d4e130490ed0" containerName="ceilometer-notification-agent"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.561663 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.563819 4680 scope.go:117] "RemoveContainer" containerID="e3827ba03ef26beb26a031201ae48069d62a537f32c8efac3a113500cd2f6abc"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.564287 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.568565 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.574262 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664629 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664675 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664706 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664763 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664805 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664857 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.664889 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8fj\" (UniqueName: \"kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766159 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766208 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766238 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766305 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766349 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.766379 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8fj\" (UniqueName: \"kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.767105 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.767857 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.772175 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.772465 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.774110 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.774293 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.789341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8fj\" (UniqueName: \"kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj\") pod \"ceilometer-0\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " pod="openstack/ceilometer-0"
Jan 26 16:28:25 crc kubenswrapper[4680]: I0126 16:28:25.888483 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:28:26 crc kubenswrapper[4680]: I0126 16:28:26.371786 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:28:26 crc kubenswrapper[4680]: W0126 16:28:26.378607 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6422e03_8a4e_48fe_8413_c88c3d137eab.slice/crio-a0ca72b5fa5d082d103695230fd2e4af0eafc102f86d0c97f51120d85ab5def2 WatchSource:0}: Error finding container a0ca72b5fa5d082d103695230fd2e4af0eafc102f86d0c97f51120d85ab5def2: Status 404 returned error can't find the container with id a0ca72b5fa5d082d103695230fd2e4af0eafc102f86d0c97f51120d85ab5def2
Jan 26 16:28:26 crc kubenswrapper[4680]: I0126 16:28:26.441360 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerStarted","Data":"a0ca72b5fa5d082d103695230fd2e4af0eafc102f86d0c97f51120d85ab5def2"}
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.182277 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae15c480-9662-404f-9778-d4e130490ed0" path="/var/lib/kubelet/pods/ae15c480-9662-404f-9778-d4e130490ed0/volumes"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.441674 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c44b75754-m2rxl"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.454122 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerStarted","Data":"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9"}
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.454163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerStarted","Data":"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c"}
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.556740 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8657f7848d-ls2sv"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.633542 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"]
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.635291 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.661237 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"]
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.713502 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.713568 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.713705 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tgmd\" (UniqueName: \"kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.816484 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.816534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.816634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tgmd\" (UniqueName: \"kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.817020 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.817266 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.853970 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tgmd\" (UniqueName: \"kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd\") pod \"redhat-operators-tzmtt\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:27 crc kubenswrapper[4680]: I0126 16:28:27.954927 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:28 crc kubenswrapper[4680]: I0126 16:28:28.466987 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerStarted","Data":"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf"}
Jan 26 16:28:28 crc kubenswrapper[4680]: I0126 16:28:28.472176 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"]
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.483524 4680 generic.go:334] "Generic (PLEG): container finished" podID="0092c6fa-128e-474b-b8d0-379592af1dc2" containerID="d185c77b64e6badf59bf2600c5dc6d637ba6d226bcff51418270e12c22a12ce5" exitCode=0
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.484046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hj79q" event={"ID":"0092c6fa-128e-474b-b8d0-379592af1dc2","Type":"ContainerDied","Data":"d185c77b64e6badf59bf2600c5dc6d637ba6d226bcff51418270e12c22a12ce5"}
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.488000 4680 generic.go:334] "Generic (PLEG): container finished" podID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerID="39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588" exitCode=0
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.488045 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerDied","Data":"39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588"}
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.488079 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerStarted","Data":"0b8e2626c2511ee91aa2c21c31b806e55121765bc34e2a8e9d251f70f7911db3"}
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.499536 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerStarted","Data":"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d"}
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.500298 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 16:28:29 crc kubenswrapper[4680]: I0126 16:28:29.545300 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.976825559 podStartE2EDuration="4.54528244s" podCreationTimestamp="2026-01-26 16:28:25 +0000 UTC" firstStartedPulling="2026-01-26 16:28:26.380985912 +0000 UTC m=+1381.542258181" lastFinishedPulling="2026-01-26 16:28:28.949442793 +0000 UTC m=+1384.110715062" observedRunningTime="2026-01-26 16:28:29.543976563 +0000 UTC m=+1384.705248832" watchObservedRunningTime="2026-01-26 16:28:29.54528244 +0000 UTC m=+1384.706554709"
Jan 26 16:28:29 crc kubenswrapper[4680]: E0126 16:28:29.883579 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.175985 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c44b75754-m2rxl"
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.209664 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8657f7848d-ls2sv"
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.374230 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c44b75754-m2rxl"]
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.510002 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon-log" containerID="cri-o://6c572d5c665c3d2eb553f17cf6e76a99a3c23b8972469d1331f143a83b8bf254" gracePeriod=30
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.510529 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" containerID="cri-o://b4675cf261bba8b67434d4c8ef50209c8b8e949c2c7040b35f76bfb3fc7d8240" gracePeriod=30
Jan 26 16:28:30 crc kubenswrapper[4680]: I0126 16:28:30.975506 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hj79q"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.042236 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data\") pod \"0092c6fa-128e-474b-b8d0-379592af1dc2\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") "
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.042465 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts\") pod \"0092c6fa-128e-474b-b8d0-379592af1dc2\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") "
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.042501 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h77d\" (UniqueName: \"kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d\") pod \"0092c6fa-128e-474b-b8d0-379592af1dc2\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") "
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.042573 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle\") pod \"0092c6fa-128e-474b-b8d0-379592af1dc2\" (UID: \"0092c6fa-128e-474b-b8d0-379592af1dc2\") "
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.050595 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts" (OuterVolumeSpecName: "scripts") pod "0092c6fa-128e-474b-b8d0-379592af1dc2" (UID: "0092c6fa-128e-474b-b8d0-379592af1dc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.067287 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d" (OuterVolumeSpecName: "kube-api-access-6h77d") pod "0092c6fa-128e-474b-b8d0-379592af1dc2" (UID: "0092c6fa-128e-474b-b8d0-379592af1dc2"). InnerVolumeSpecName "kube-api-access-6h77d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.129362 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data" (OuterVolumeSpecName: "config-data") pod "0092c6fa-128e-474b-b8d0-379592af1dc2" (UID: "0092c6fa-128e-474b-b8d0-379592af1dc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.146668 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.146701 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.146713 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h77d\" (UniqueName: \"kubernetes.io/projected/0092c6fa-128e-474b-b8d0-379592af1dc2-kube-api-access-6h77d\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.158513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0092c6fa-128e-474b-b8d0-379592af1dc2" (UID: "0092c6fa-128e-474b-b8d0-379592af1dc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.249615 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0092c6fa-128e-474b-b8d0-379592af1dc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.519530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hj79q" event={"ID":"0092c6fa-128e-474b-b8d0-379592af1dc2","Type":"ContainerDied","Data":"81bb780866640cab00576c52eea7a8ce50dd55586cd7970be7737ae9bc9d7cf0"}
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.519599 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81bb780866640cab00576c52eea7a8ce50dd55586cd7970be7737ae9bc9d7cf0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.519552 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hj79q"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.521938 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerStarted","Data":"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970"}
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.816766 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 16:28:31 crc kubenswrapper[4680]: E0126 16:28:31.817411 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0092c6fa-128e-474b-b8d0-379592af1dc2" containerName="nova-cell0-conductor-db-sync"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.817430 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0092c6fa-128e-474b-b8d0-379592af1dc2" containerName="nova-cell0-conductor-db-sync"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.817691 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0092c6fa-128e-474b-b8d0-379592af1dc2" containerName="nova-cell0-conductor-db-sync"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.818446 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.822962 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-29w67"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.823177 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.828838 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.879847 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hsn\" (UniqueName: \"kubernetes.io/projected/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-kube-api-access-v8hsn\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.879914 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.879997 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.982308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hsn\" (UniqueName: \"kubernetes.io/projected/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-kube-api-access-v8hsn\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.982377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.982436 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.990047 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:31 crc kubenswrapper[4680]: I0126 16:28:31.990133 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:32 crc kubenswrapper[4680]: I0126 16:28:32.007030 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hsn\" (UniqueName: \"kubernetes.io/projected/87c6ff4a-48c0-4eac-9581-d2820b2b7f28-kube-api-access-v8hsn\") pod \"nova-cell0-conductor-0\" (UID: \"87c6ff4a-48c0-4eac-9581-d2820b2b7f28\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:32 crc kubenswrapper[4680]: I0126 16:28:32.134878 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:32 crc kubenswrapper[4680]: I0126 16:28:32.507374 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 16:28:32 crc kubenswrapper[4680]: I0126 16:28:32.538530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87c6ff4a-48c0-4eac-9581-d2820b2b7f28","Type":"ContainerStarted","Data":"b218309baf88311ca8ed0e6c353fbc722b3c1cdf80332a71e8e9bcf945019950"}
Jan 26 16:28:33 crc kubenswrapper[4680]: I0126 16:28:33.556506 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87c6ff4a-48c0-4eac-9581-d2820b2b7f28","Type":"ContainerStarted","Data":"7167742d792bd67a5446a9848254f53696d622481971aa3af631101389e65798"}
Jan 26 16:28:33 crc kubenswrapper[4680]: I0126 16:28:33.558207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:33 crc kubenswrapper[4680]: I0126 16:28:33.588446 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.588427166 podStartE2EDuration="2.588427166s" podCreationTimestamp="2026-01-26 16:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:28:33.581651855 +0000 UTC m=+1388.742924124" watchObservedRunningTime="2026-01-26 16:28:33.588427166 +0000 UTC m=+1388.749699435"
Jan 26 16:28:34 crc kubenswrapper[4680]: I0126 16:28:34.566078 4680 generic.go:334] "Generic (PLEG): container finished" podID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerID="b4675cf261bba8b67434d4c8ef50209c8b8e949c2c7040b35f76bfb3fc7d8240" exitCode=0
Jan 26 16:28:34 crc kubenswrapper[4680]: I0126 16:28:34.566100 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerDied","Data":"b4675cf261bba8b67434d4c8ef50209c8b8e949c2c7040b35f76bfb3fc7d8240"}
Jan 26 16:28:34 crc kubenswrapper[4680]: I0126 16:28:34.566410 4680 scope.go:117] "RemoveContainer" containerID="d742cdda8a8dc8549e8d05b56d22ab900385caaaf38f775d5c30bb7cab6cfbcd"
Jan 26 16:28:35 crc kubenswrapper[4680]: I0126 16:28:35.113279 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 26 16:28:35 crc kubenswrapper[4680]: I0126 16:28:35.580740 4680 generic.go:334] "Generic (PLEG): container finished" podID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerID="ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970" exitCode=0
Jan 26 16:28:35 crc kubenswrapper[4680]: I0126 16:28:35.580849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerDied","Data":"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970"}
Jan 26 16:28:37 crc kubenswrapper[4680]: I0126 16:28:37.597875 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerStarted","Data":"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92"}
Jan 26 16:28:37 crc kubenswrapper[4680]: I0126 16:28:37.955625 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:37 crc kubenswrapper[4680]: I0126 16:28:37.955672 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tzmtt"
Jan 26 16:28:39 crc kubenswrapper[4680]: I0126 16:28:39.006084 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzmtt" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" probeResult="failure" output=<
Jan 26 16:28:39 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 16:28:39 crc kubenswrapper[4680]: >
Jan 26 16:28:40 crc kubenswrapper[4680]: E0126 16:28:40.128206 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.166320 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.193456 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tzmtt" podStartSLOduration=8.142109142 podStartE2EDuration="15.193432318s" podCreationTimestamp="2026-01-26 16:28:27 +0000 UTC" firstStartedPulling="2026-01-26 16:28:29.489540625 +0000 UTC m=+1384.650812894" lastFinishedPulling="2026-01-26 16:28:36.540863801 +0000 UTC m=+1391.702136070" observedRunningTime="2026-01-26 16:28:37.618680799 +0000 UTC m=+1392.779953068" watchObservedRunningTime="2026-01-26 16:28:42.193432318 +0000 UTC m=+1397.354704607"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.644123 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kn92r"]
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.645420 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.647700 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.647838 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.667171 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn92r"]
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.704245 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.704310 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.704344 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktzp\" (UniqueName: \"kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.704380 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.806329 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.806385 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktzp\" (UniqueName: \"kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.806445 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.806853 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.815696 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.816604 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.817117 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.860753 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktzp\" (UniqueName: \"kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp\") pod \"nova-cell0-cell-mapping-kn92r\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.918201 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.919460 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.923969 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 16:28:42 crc kubenswrapper[4680]: I0126 16:28:42.978571 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn92r"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.013580 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrjv\" (UniqueName: \"kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.013641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.013676 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.091554 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.125223 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrjv\" (UniqueName: \"kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.125297 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.125328 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.143157 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.145886 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.181793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrjv\" (UniqueName: \"kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv\") pod \"nova-scheduler-0\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.186275 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.197189 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.211491 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.230442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.230533 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.230639 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vg5\" (UniqueName: \"kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.271998 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.272038 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.273598 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.282862 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.303912 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.305638 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.313472 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.321845 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.358594 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.352126 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76vg5\" (UniqueName: \"kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.363634 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.363724 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.363812 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.363893 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.363977 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.364219 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdsv\" (UniqueName: \"kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.364319 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.364395 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0"
Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.364728 4680 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.364907 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.373794 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.397628 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.405116 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.441112 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vg5\" (UniqueName: \"kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470621 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdsv\" (UniqueName: \"kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470665 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470719 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470913 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.470938 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.471011 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.471026 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.473341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.473709 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.495060 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.499768 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.500501 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.503153 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.506032 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm\") pod 
\"nova-api-0\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.515092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdsv\" (UniqueName: \"kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv\") pod \"nova-metadata-0\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.526660 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.528304 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.535910 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579577 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579637 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxrg\" (UniqueName: \"kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579682 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579754 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579794 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.579819 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.625585 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.640047 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.660355 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.683819 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdxrg\" (UniqueName: \"kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.683882 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.683958 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.684004 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.684029 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.684077 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.685193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.685371 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.685734 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.686835 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.693284 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.707620 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdxrg\" (UniqueName: \"kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg\") pod \"dnsmasq-dns-5475b7678f-tjq4z\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.855412 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:43 crc kubenswrapper[4680]: I0126 16:28:43.997826 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn92r"] Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.241925 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.700461 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a4b45ba-7122-4134-a15a-06b560fe6c4e","Type":"ContainerStarted","Data":"ff84466368422cc1b31951da3623bef4aa91eaf860310729becf4a8d8e181f31"} Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.710179 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn92r" event={"ID":"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6","Type":"ContainerStarted","Data":"c5dfdfd0db4d5b1c650aff2fea418a3933e50e1acd32ecfe36bc8ce9a9e7e648"} Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.710223 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn92r" event={"ID":"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6","Type":"ContainerStarted","Data":"bc7e28898fec53144036aac1ed367b9f2b00dff94ffb7361d3b4da163b15b2e6"} Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.742910 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kn92r" podStartSLOduration=2.742890373 podStartE2EDuration="2.742890373s" podCreationTimestamp="2026-01-26 16:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:28:44.735240247 +0000 UTC m=+1399.896512516" watchObservedRunningTime="2026-01-26 16:28:44.742890373 +0000 UTC m=+1399.904162642" Jan 26 16:28:44 crc kubenswrapper[4680]: W0126 16:28:44.800562 4680 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26ddc5f_c890_49a0_b720_627df50abaaa.slice/crio-1b4e8f0796ac0715b7571189aea9683b969a7b099810dae1b70ebd7ef8c51fb3 WatchSource:0}: Error finding container 1b4e8f0796ac0715b7571189aea9683b969a7b099810dae1b70ebd7ef8c51fb3: Status 404 returned error can't find the container with id 1b4e8f0796ac0715b7571189aea9683b969a7b099810dae1b70ebd7ef8c51fb3 Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.816905 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.837480 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:28:44 crc kubenswrapper[4680]: I0126 16:28:44.879018 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.002686 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.115006 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.445440 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4br2w"] Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.446651 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.451231 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.451794 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.469300 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkt8\" (UniqueName: \"kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.469343 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.469370 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.469400 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.483921 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4br2w"] Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.571541 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkt8\" (UniqueName: \"kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.571585 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.571612 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.571644 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.581185 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.581380 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.581850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.591223 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkt8\" (UniqueName: \"kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8\") pod \"nova-cell1-conductor-db-sync-4br2w\" (UID: 
\"1fa03558-d4dc-4769-946d-c017e5d8d767\") " pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.726180 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerStarted","Data":"a541ffe87010d8a13c68121fbe9890746efca5adb5270f839c1e227bd50f138f"} Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.731136 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerStarted","Data":"5ca9cb898b2afbefd850eb0db20e32393c359fa1e11bbe54eb6364619724e6f3"} Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.733934 4680 generic.go:334] "Generic (PLEG): container finished" podID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerID="c251b671eb4b7cfba182b9435b8ebde52f9c2c60863138b5a699809b6bd45c46" exitCode=0 Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.733977 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" event={"ID":"e6b7a2ed-0d0a-4585-8684-fd407666efa9","Type":"ContainerDied","Data":"c251b671eb4b7cfba182b9435b8ebde52f9c2c60863138b5a699809b6bd45c46"} Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.733995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" event={"ID":"e6b7a2ed-0d0a-4585-8684-fd407666efa9","Type":"ContainerStarted","Data":"6ba9b58f730a3b236ec3290bffa2fc6cc2c9c8e60200f6db39ac3adea2430669"} Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.746629 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c26ddc5f-c890-49a0-b720-627df50abaaa","Type":"ContainerStarted","Data":"1b4e8f0796ac0715b7571189aea9683b969a7b099810dae1b70ebd7ef8c51fb3"} Jan 26 16:28:45 crc kubenswrapper[4680]: I0126 16:28:45.805894 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.292644 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4br2w"] Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.759753 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4br2w" event={"ID":"1fa03558-d4dc-4769-946d-c017e5d8d767","Type":"ContainerStarted","Data":"6ddd3353c4af6ca9caea26a23c4fd18b0caf03d942fd6e00b8d1e5c71821903f"} Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.763224 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" event={"ID":"e6b7a2ed-0d0a-4585-8684-fd407666efa9","Type":"ContainerStarted","Data":"af29f1d58f380e28aef2d0c7b19edf5065e115b061ba4684eddeabd074429adf"} Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.763348 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.803774 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" podStartSLOduration=3.803754391 podStartE2EDuration="3.803754391s" podCreationTimestamp="2026-01-26 16:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:28:46.794916871 +0000 UTC m=+1401.956189140" watchObservedRunningTime="2026-01-26 16:28:46.803754391 +0000 UTC m=+1401.965026660" Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.981327 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:28:46 crc kubenswrapper[4680]: I0126 16:28:46.981715 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:28:48 crc kubenswrapper[4680]: I0126 16:28:48.025117 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:28:48 crc kubenswrapper[4680]: I0126 16:28:48.039556 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:28:48 crc kubenswrapper[4680]: I0126 16:28:48.783323 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4br2w" event={"ID":"1fa03558-d4dc-4769-946d-c017e5d8d767","Type":"ContainerStarted","Data":"cf31bc85f65133f78585954f3ee54886542add33746b0da24a7e5a7549a743c0"} Jan 26 16:28:48 crc kubenswrapper[4680]: I0126 16:28:48.803604 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4br2w" podStartSLOduration=3.803576585 podStartE2EDuration="3.803576585s" podCreationTimestamp="2026-01-26 16:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:28:48.800695784 +0000 UTC m=+1403.961968053" watchObservedRunningTime="2026-01-26 16:28:48.803576585 
+0000 UTC m=+1403.964848854" Jan 26 16:28:49 crc kubenswrapper[4680]: I0126 16:28:49.041636 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzmtt" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" probeResult="failure" output=< Jan 26 16:28:49 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:28:49 crc kubenswrapper[4680]: > Jan 26 16:28:50 crc kubenswrapper[4680]: E0126 16:28:50.478185 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.805219 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a4b45ba-7122-4134-a15a-06b560fe6c4e","Type":"ContainerStarted","Data":"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.807905 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerStarted","Data":"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.807952 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerStarted","Data":"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.808105 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-log" containerID="cri-o://9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf" gracePeriod=30 Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.808396 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-metadata" containerID="cri-o://30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1" gracePeriod=30 Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.812735 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerStarted","Data":"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.812793 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerStarted","Data":"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.815674 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c26ddc5f-c890-49a0-b720-627df50abaaa","Type":"ContainerStarted","Data":"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51"} Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.815813 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c26ddc5f-c890-49a0-b720-627df50abaaa" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51" gracePeriod=30 Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.833980 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.041996469 podStartE2EDuration="8.833955692s" podCreationTimestamp="2026-01-26 16:28:42 +0000 UTC" firstStartedPulling="2026-01-26 16:28:44.256676482 +0000 UTC m=+1399.417948751" lastFinishedPulling="2026-01-26 16:28:50.048635705 +0000 UTC m=+1405.209907974" observedRunningTime="2026-01-26 16:28:50.82715735 +0000 UTC m=+1405.988429619" watchObservedRunningTime="2026-01-26 16:28:50.833955692 +0000 UTC m=+1405.995227961" Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.878858 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.6423427630000003 podStartE2EDuration="8.878826349s" podCreationTimestamp="2026-01-26 16:28:42 +0000 UTC" firstStartedPulling="2026-01-26 16:28:44.813447726 +0000 UTC m=+1399.974719995" lastFinishedPulling="2026-01-26 16:28:50.049931312 +0000 UTC m=+1405.211203581" observedRunningTime="2026-01-26 16:28:50.853463273 +0000 UTC m=+1406.014735542" watchObservedRunningTime="2026-01-26 16:28:50.878826349 +0000 UTC m=+1406.040098608" Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.894573 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7212529 podStartE2EDuration="7.894553873s" podCreationTimestamp="2026-01-26 16:28:43 +0000 UTC" firstStartedPulling="2026-01-26 16:28:44.873636455 +0000 UTC m=+1400.034908724" lastFinishedPulling="2026-01-26 16:28:50.046937428 +0000 UTC m=+1405.208209697" observedRunningTime="2026-01-26 16:28:50.871573764 +0000 UTC m=+1406.032846023" watchObservedRunningTime="2026-01-26 16:28:50.894553873 +0000 UTC m=+1406.055826142" Jan 26 16:28:50 crc kubenswrapper[4680]: I0126 16:28:50.932831 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.82003673 podStartE2EDuration="7.932806283s" podCreationTimestamp="2026-01-26 16:28:43 +0000 UTC" firstStartedPulling="2026-01-26 16:28:44.934404271 +0000 UTC m=+1400.095676540" lastFinishedPulling="2026-01-26 16:28:50.047173824 +0000 UTC m=+1405.208446093" observedRunningTime="2026-01-26 16:28:50.90186606 +0000 UTC m=+1406.063138329" watchObservedRunningTime="2026-01-26 16:28:50.932806283 +0000 UTC m=+1406.094078552" Jan 26 16:28:51 crc kubenswrapper[4680]: I0126 16:28:51.845641 4680 generic.go:334] "Generic (PLEG): container finished" podID="15819343-28c9-4353-92ec-600a3e910bcb" containerID="9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf" exitCode=143 Jan 26 16:28:51 crc kubenswrapper[4680]: I0126 16:28:51.845713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerDied","Data":"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf"} Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.359688 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.360098 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 
16:28:53.387234 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.626797 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.641353 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.641399 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.660929 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.660974 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.858249 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.929287 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.931255 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="dnsmasq-dns" containerID="cri-o://46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545" gracePeriod=10 Jan 26 16:28:53 crc kubenswrapper[4680]: I0126 16:28:53.941631 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.593034 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.675944 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plbrc\" (UniqueName: \"kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.676990 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.677145 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.677168 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.677201 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.677221 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb\") pod \"9599857d-051f-4a93-8c81-af5e73f5e087\" (UID: \"9599857d-051f-4a93-8c81-af5e73f5e087\") " Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.711766 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc" (OuterVolumeSpecName: "kube-api-access-plbrc") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "kube-api-access-plbrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.737656 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.737718 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.778680 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.780497 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.780781 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plbrc\" (UniqueName: \"kubernetes.io/projected/9599857d-051f-4a93-8c81-af5e73f5e087-kube-api-access-plbrc\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.811296 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config" (OuterVolumeSpecName: "config") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.835846 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.856912 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.858938 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9599857d-051f-4a93-8c81-af5e73f5e087" (UID: "9599857d-051f-4a93-8c81-af5e73f5e087"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.893389 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.893429 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.893439 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.893454 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9599857d-051f-4a93-8c81-af5e73f5e087-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.897905 4680 generic.go:334] "Generic (PLEG): container finished" podID="9599857d-051f-4a93-8c81-af5e73f5e087" containerID="46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545" exitCode=0 Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.899719 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.899712 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" event={"ID":"9599857d-051f-4a93-8c81-af5e73f5e087","Type":"ContainerDied","Data":"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545"} Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.899930 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" event={"ID":"9599857d-051f-4a93-8c81-af5e73f5e087","Type":"ContainerDied","Data":"1b91fdc8a3e92b2d3befa90d596138f74f8ebb6dcff64d8b4bd7bd194c8a71fd"} Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.900019 4680 scope.go:117] "RemoveContainer" containerID="46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.926857 4680 scope.go:117] "RemoveContainer" containerID="5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.975444 4680 scope.go:117] "RemoveContainer" containerID="46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545" Jan 26 16:28:54 crc kubenswrapper[4680]: E0126 16:28:54.976175 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545\": container with ID starting with 46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545 not found: ID does not exist" containerID="46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.976217 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545"} err="failed to get container status \"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545\": rpc error: code = NotFound desc = could 
not find container \"46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545\": container with ID starting with 46faffb0589973c69ba57a97a4b84e25d399dc4e34557cbb34b556700cbea545 not found: ID does not exist" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.976245 4680 scope.go:117] "RemoveContainer" containerID="5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f" Jan 26 16:28:54 crc kubenswrapper[4680]: E0126 16:28:54.976465 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f\": container with ID starting with 5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f not found: ID does not exist" containerID="5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.976484 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f"} err="failed to get container status \"5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f\": rpc error: code = NotFound desc = could not find container \"5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f\": container with ID starting with 5155d48fbe82073b05cba05d087afbfd41de5d67cf0aa412d11f01ce522f965f not found: ID does not exist" Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.978707 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:28:54 crc kubenswrapper[4680]: I0126 16:28:54.987751 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78b86675f-bmh7k"] Jan 26 16:28:55 crc kubenswrapper[4680]: I0126 16:28:55.113811 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c44b75754-m2rxl" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 26 16:28:55 crc kubenswrapper[4680]: I0126 16:28:55.114126 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:28:55 crc kubenswrapper[4680]: I0126 16:28:55.181827 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" path="/var/lib/kubelet/pods/9599857d-051f-4a93-8c81-af5e73f5e087/volumes" Jan 26 16:28:55 crc kubenswrapper[4680]: I0126 16:28:55.913187 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 16:28:56 crc kubenswrapper[4680]: I0126 16:28:56.918784 4680 generic.go:334] "Generic (PLEG): container finished" podID="e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" containerID="c5dfdfd0db4d5b1c650aff2fea418a3933e50e1acd32ecfe36bc8ce9a9e7e648" exitCode=0 Jan 26 16:28:56 crc kubenswrapper[4680]: I0126 16:28:56.918851 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn92r" event={"ID":"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6","Type":"ContainerDied","Data":"c5dfdfd0db4d5b1c650aff2fea418a3933e50e1acd32ecfe36bc8ce9a9e7e648"} Jan 26 16:28:57 crc kubenswrapper[4680]: I0126 16:28:57.930343 4680 generic.go:334] "Generic (PLEG): container finished" podID="1fa03558-d4dc-4769-946d-c017e5d8d767" 
containerID="cf31bc85f65133f78585954f3ee54886542add33746b0da24a7e5a7549a743c0" exitCode=0 Jan 26 16:28:57 crc kubenswrapper[4680]: I0126 16:28:57.930427 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4br2w" event={"ID":"1fa03558-d4dc-4769-946d-c017e5d8d767","Type":"ContainerDied","Data":"cf31bc85f65133f78585954f3ee54886542add33746b0da24a7e5a7549a743c0"} Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.004810 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tzmtt" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.063263 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tzmtt" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.324210 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn92r" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.399931 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle\") pod \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.400208 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts\") pod \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.400352 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data\") pod \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.400443 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktzp\" (UniqueName: \"kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp\") pod \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\" (UID: \"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6\") " Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.407358 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts" (OuterVolumeSpecName: "scripts") pod "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" (UID: "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.408029 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp" (OuterVolumeSpecName: "kube-api-access-tktzp") pod "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" (UID: "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6"). InnerVolumeSpecName "kube-api-access-tktzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.434496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data" (OuterVolumeSpecName: "config-data") pod "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" (UID: "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.436554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" (UID: "e53297ff-fd86-4e5a-8e8b-c1c44c9118b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.502882 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.503255 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktzp\" (UniqueName: \"kubernetes.io/projected/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-kube-api-access-tktzp\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.503361 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.503445 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.854101 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"] Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.942009 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn92r" Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.942049 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn92r" event={"ID":"e53297ff-fd86-4e5a-8e8b-c1c44c9118b6","Type":"ContainerDied","Data":"bc7e28898fec53144036aac1ed367b9f2b00dff94ffb7361d3b4da163b15b2e6"} Jan 26 16:28:58 crc kubenswrapper[4680]: I0126 16:28:58.943355 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7e28898fec53144036aac1ed367b9f2b00dff94ffb7361d3b4da163b15b2e6" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.108177 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.110037 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-api" containerID="cri-o://2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb" gracePeriod=30 Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.110188 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-log" containerID="cri-o://ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642" gracePeriod=30 Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.126528 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.126707 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerName="nova-scheduler-scheduler" containerID="cri-o://32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" gracePeriod=30 Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.407606 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.414086 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78b86675f-bmh7k" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.536172 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data\") pod \"1fa03558-d4dc-4769-946d-c017e5d8d767\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.536318 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts\") pod \"1fa03558-d4dc-4769-946d-c017e5d8d767\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.536424 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle\") pod \"1fa03558-d4dc-4769-946d-c017e5d8d767\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.536453 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slkt8\" (UniqueName: \"kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8\") pod \"1fa03558-d4dc-4769-946d-c017e5d8d767\" (UID: \"1fa03558-d4dc-4769-946d-c017e5d8d767\") " Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.542974 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts" (OuterVolumeSpecName: "scripts") pod "1fa03558-d4dc-4769-946d-c017e5d8d767" (UID: "1fa03558-d4dc-4769-946d-c017e5d8d767"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.557409 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8" (OuterVolumeSpecName: "kube-api-access-slkt8") pod "1fa03558-d4dc-4769-946d-c017e5d8d767" (UID: "1fa03558-d4dc-4769-946d-c017e5d8d767"). InnerVolumeSpecName "kube-api-access-slkt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.576514 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa03558-d4dc-4769-946d-c017e5d8d767" (UID: "1fa03558-d4dc-4769-946d-c017e5d8d767"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.579405 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data" (OuterVolumeSpecName: "config-data") pod "1fa03558-d4dc-4769-946d-c017e5d8d767" (UID: "1fa03558-d4dc-4769-946d-c017e5d8d767"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.638669 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.638707 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.638722 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slkt8\" (UniqueName: \"kubernetes.io/projected/1fa03558-d4dc-4769-946d-c017e5d8d767-kube-api-access-slkt8\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.638734 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa03558-d4dc-4769-946d-c017e5d8d767-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.951183 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4br2w" event={"ID":"1fa03558-d4dc-4769-946d-c017e5d8d767","Type":"ContainerDied","Data":"6ddd3353c4af6ca9caea26a23c4fd18b0caf03d942fd6e00b8d1e5c71821903f"} Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.952017 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ddd3353c4af6ca9caea26a23c4fd18b0caf03d942fd6e00b8d1e5c71821903f" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.951348 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4br2w" Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.952948 4680 generic.go:334] "Generic (PLEG): container finished" podID="45c10201-3368-42c5-8818-ea02aab4842f" containerID="ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642" exitCode=143 Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.952987 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerDied","Data":"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642"} Jan 26 16:28:59 crc kubenswrapper[4680]: I0126 16:28:59.953231 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tzmtt" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" containerID="cri-o://d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92" gracePeriod=2 Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.041845 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 16:29:00 crc kubenswrapper[4680]: E0126 16:29:00.042406 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" containerName="nova-manage" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.042491 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" containerName="nova-manage" Jan 26 16:29:00 crc kubenswrapper[4680]: E0126 16:29:00.042555 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="dnsmasq-dns" Jan 26 16:29:00 crc 
kubenswrapper[4680]: I0126 16:29:00.042615 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="dnsmasq-dns" Jan 26 16:29:00 crc kubenswrapper[4680]: E0126 16:29:00.042678 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="init" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.042735 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="init" Jan 26 16:29:00 crc kubenswrapper[4680]: E0126 16:29:00.042793 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa03558-d4dc-4769-946d-c017e5d8d767" containerName="nova-cell1-conductor-db-sync" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.042851 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa03558-d4dc-4769-946d-c017e5d8d767" containerName="nova-cell1-conductor-db-sync" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.043108 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa03558-d4dc-4769-946d-c017e5d8d767" containerName="nova-cell1-conductor-db-sync" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.043183 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" containerName="nova-manage" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.043243 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9599857d-051f-4a93-8c81-af5e73f5e087" containerName="dnsmasq-dns" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.043950 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.045168 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.045340 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.045417 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkcs\" (UniqueName: \"kubernetes.io/projected/7877ac8d-e432-4839-a322-a7625dbee0e2-kube-api-access-hrkcs\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.046715 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.058601 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.148559 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.148910 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrkcs\" (UniqueName: \"kubernetes.io/projected/7877ac8d-e432-4839-a322-a7625dbee0e2-kube-api-access-hrkcs\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.149112 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.154176 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.154604 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7877ac8d-e432-4839-a322-a7625dbee0e2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.166709 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrkcs\" (UniqueName: \"kubernetes.io/projected/7877ac8d-e432-4839-a322-a7625dbee0e2-kube-api-access-hrkcs\") pod \"nova-cell1-conductor-0\" (UID: \"7877ac8d-e432-4839-a322-a7625dbee0e2\") " pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.180848 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.540326 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.540519 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" containerName="kube-state-metrics" containerID="cri-o://5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0" gracePeriod=30 Jan 26 16:29:00 crc kubenswrapper[4680]: W0126 16:29:00.640706 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7877ac8d_e432_4839_a322_a7625dbee0e2.slice/crio-d79274e10269487f562d02a8da83094bff56f977cc2cb7d9e8fc4bd4fe95e574 WatchSource:0}: Error finding container d79274e10269487f562d02a8da83094bff56f977cc2cb7d9e8fc4bd4fe95e574: Status 404 returned error can't find the container with id d79274e10269487f562d02a8da83094bff56f977cc2cb7d9e8fc4bd4fe95e574 Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.652562 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 16:29:00 crc kubenswrapper[4680]: E0126 16:29:00.811338 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22f475ff_bcba_4cdd_a6ed_62be26882b42.slice/crio-5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.830839 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzmtt" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.973725 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content\") pod \"3df331d1-8589-47bf-bf1e-29618751c2d3\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.973815 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tgmd\" (UniqueName: \"kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd\") pod \"3df331d1-8589-47bf-bf1e-29618751c2d3\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.974003 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities\") pod \"3df331d1-8589-47bf-bf1e-29618751c2d3\" (UID: \"3df331d1-8589-47bf-bf1e-29618751c2d3\") " Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.978898 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities" (OuterVolumeSpecName: "utilities") pod "3df331d1-8589-47bf-bf1e-29618751c2d3" (UID: "3df331d1-8589-47bf-bf1e-29618751c2d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.979258 4680 generic.go:334] "Generic (PLEG): container finished" podID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerID="d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92" exitCode=0 Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.979303 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzmtt" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.979321 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerDied","Data":"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92"} Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.980283 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzmtt" event={"ID":"3df331d1-8589-47bf-bf1e-29618751c2d3","Type":"ContainerDied","Data":"0b8e2626c2511ee91aa2c21c31b806e55121765bc34e2a8e9d251f70f7911db3"} Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.980300 4680 scope.go:117] "RemoveContainer" containerID="d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.984441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7877ac8d-e432-4839-a322-a7625dbee0e2","Type":"ContainerStarted","Data":"d79274e10269487f562d02a8da83094bff56f977cc2cb7d9e8fc4bd4fe95e574"} Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.985441 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd" (OuterVolumeSpecName: "kube-api-access-4tgmd") pod "3df331d1-8589-47bf-bf1e-29618751c2d3" (UID: "3df331d1-8589-47bf-bf1e-29618751c2d3"). InnerVolumeSpecName "kube-api-access-4tgmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.992536 4680 generic.go:334] "Generic (PLEG): container finished" podID="22f475ff-bcba-4cdd-a6ed-62be26882b42" containerID="5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0" exitCode=2 Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.992665 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22f475ff-bcba-4cdd-a6ed-62be26882b42","Type":"ContainerDied","Data":"5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0"} Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.997747 4680 generic.go:334] "Generic (PLEG): container finished" podID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerID="6c572d5c665c3d2eb553f17cf6e76a99a3c23b8972469d1331f143a83b8bf254" exitCode=137 Jan 26 16:29:00 crc kubenswrapper[4680]: I0126 16:29:00.997795 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerDied","Data":"6c572d5c665c3d2eb553f17cf6e76a99a3c23b8972469d1331f143a83b8bf254"} Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.017108 4680 scope.go:117] "RemoveContainer" containerID="ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.053044 4680 scope.go:117] "RemoveContainer" containerID="39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.076923 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tgmd\" (UniqueName: \"kubernetes.io/projected/3df331d1-8589-47bf-bf1e-29618751c2d3-kube-api-access-4tgmd\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.076957 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.125883 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df331d1-8589-47bf-bf1e-29618751c2d3" (UID: "3df331d1-8589-47bf-bf1e-29618751c2d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.179241 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df331d1-8589-47bf-bf1e-29618751c2d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.246805 4680 scope.go:117] "RemoveContainer" containerID="d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92" Jan 26 16:29:01 crc kubenswrapper[4680]: E0126 16:29:01.247380 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92\": container with ID starting with d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92 not found: ID does not exist" containerID="d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.247441 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92"} err="failed to get container status \"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92\": rpc error: code = NotFound desc = could not find container \"d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92\": container with ID starting with d08d2d491c09a9c5a4e9eeb5e118d78e45c403beb2fac0d8717d455097d22a92 not found: ID does not exist" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.247461 4680 scope.go:117] "RemoveContainer" containerID="ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970" Jan 26 16:29:01 crc kubenswrapper[4680]: E0126 16:29:01.247790 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970\": container with ID starting with ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970 not found: ID does not exist" containerID="ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.247807 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970"} err="failed to get container status \"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970\": rpc error: code = NotFound desc = could not find container \"ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970\": container with ID starting with ab7253991f82eae4d4658a1fcc2719f7e3dd5017c54da1beb0da3efa0cadf970 not found: ID does not exist" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.247819 4680 scope.go:117] "RemoveContainer" containerID="39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588" Jan 26 16:29:01 crc kubenswrapper[4680]: E0126 16:29:01.249244 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588\": container with ID starting with 39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588 not found: ID does not exist" containerID="39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.249264 4680 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588"} err="failed to get container status \"39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588\": rpc error: code = NotFound desc = could not find container \"39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588\": container with ID starting with 39fa1b4b055abecb3c2cf704ba880d93c0037c3cb6c1c002614428cdf9435588 not found: ID does not exist" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.383483 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"] Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.398561 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tzmtt"] Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.543233 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698299 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698342 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698407 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698456 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txr7t\" (UniqueName: \"kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698477 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698502 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.698557 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs\") pod \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\" (UID: \"c61b97a0-f2b3-4935-a1a0-d6e3484410e5\") " Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.700770 4680 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs" (OuterVolumeSpecName: "logs") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.739463 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t" (OuterVolumeSpecName: "kube-api-access-txr7t") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "kube-api-access-txr7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.748853 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data" (OuterVolumeSpecName: "config-data") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.752575 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.799828 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.800488 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txr7t\" (UniqueName: \"kubernetes.io/projected/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-kube-api-access-txr7t\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.800518 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.800528 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.800539 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.800548 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.825836 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts" (OuterVolumeSpecName: "scripts") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.832515 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.836653 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c61b97a0-f2b3-4935-a1a0-d6e3484410e5" (UID: "c61b97a0-f2b3-4935-a1a0-d6e3484410e5"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.901789 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:01 crc kubenswrapper[4680]: I0126 16:29:01.901822 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c61b97a0-f2b3-4935-a1a0-d6e3484410e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.002744 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct57d\" (UniqueName: \"kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d\") pod \"22f475ff-bcba-4cdd-a6ed-62be26882b42\" (UID: \"22f475ff-bcba-4cdd-a6ed-62be26882b42\") " Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.010747 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d" (OuterVolumeSpecName: "kube-api-access-ct57d") pod "22f475ff-bcba-4cdd-a6ed-62be26882b42" (UID: "22f475ff-bcba-4cdd-a6ed-62be26882b42"). InnerVolumeSpecName "kube-api-access-ct57d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.011572 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c44b75754-m2rxl" event={"ID":"c61b97a0-f2b3-4935-a1a0-d6e3484410e5","Type":"ContainerDied","Data":"6e0095960c7b7264edce513bc3662f3e3cc0d701d556d48d02acd0c35149cd19"} Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.011618 4680 scope.go:117] "RemoveContainer" containerID="b4675cf261bba8b67434d4c8ef50209c8b8e949c2c7040b35f76bfb3fc7d8240" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.011711 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c44b75754-m2rxl" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.026894 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7877ac8d-e432-4839-a322-a7625dbee0e2","Type":"ContainerStarted","Data":"ec0ad2923aa796b63479478ead9ade899267ad6330dc1ae2753364b4dfe9a3df"} Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.027277 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.040399 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22f475ff-bcba-4cdd-a6ed-62be26882b42","Type":"ContainerDied","Data":"fb92deda3149a8c326e20cfe548e76a960dcbdf2be5ae6e4345605d815de6a2c"} Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.040732 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.053323 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.05330141 podStartE2EDuration="2.05330141s" podCreationTimestamp="2026-01-26 16:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:02.047286761 +0000 UTC m=+1417.208559030" watchObservedRunningTime="2026-01-26 16:29:02.05330141 +0000 UTC m=+1417.214573679" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.086328 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c44b75754-m2rxl"] Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.096787 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c44b75754-m2rxl"] Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.104924 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct57d\" (UniqueName: \"kubernetes.io/projected/22f475ff-bcba-4cdd-a6ed-62be26882b42-kube-api-access-ct57d\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.115587 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.126546 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140095 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140508 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140520 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140551 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140559 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140571 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="extract-utilities" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140577 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="extract-utilities" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140587 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="extract-content" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140592 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="extract-content" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140601 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" containerName="kube-state-metrics" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140606 4680 
state_mem.go:107] "Deleted CPUSet assignment" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" containerName="kube-state-metrics" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140618 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140623 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140635 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140640 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: E0126 16:29:02.140647 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon-log" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140653 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon-log" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140811 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" containerName="kube-state-metrics" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140823 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140841 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" containerName="registry-server" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140851 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon-log" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.140862 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.141500 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.147703 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.147824 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.151019 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.227807 4680 scope.go:117] "RemoveContainer" containerID="6c572d5c665c3d2eb553f17cf6e76a99a3c23b8972469d1331f143a83b8bf254" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.247568 4680 scope.go:117] "RemoveContainer" containerID="5209e212e2204abe1f8ecc4fac4c6eb44ed5c43062b98e5c67f2eb3e5e3e6da0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.308524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptrx\" (UniqueName: \"kubernetes.io/projected/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-api-access-4ptrx\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.308702 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.308744 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.308770 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.410240 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.410280 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.410322 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.411057 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptrx\" (UniqueName: \"kubernetes.io/projected/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-api-access-4ptrx\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.414491 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.414850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.420701 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.437611 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptrx\" (UniqueName: \"kubernetes.io/projected/f08c6bd8-31d6-4769-8ed3-9e46e40a4e66-kube-api-access-4ptrx\") pod \"kube-state-metrics-0\" (UID: \"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66\") " pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.477170 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.658717 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.830035 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle\") pod \"45c10201-3368-42c5-8818-ea02aab4842f\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.830461 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs\") pod \"45c10201-3368-42c5-8818-ea02aab4842f\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.830561 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data\") pod \"45c10201-3368-42c5-8818-ea02aab4842f\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.830605 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm\") pod \"45c10201-3368-42c5-8818-ea02aab4842f\" (UID: \"45c10201-3368-42c5-8818-ea02aab4842f\") " Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.831223 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs" (OuterVolumeSpecName: "logs") pod "45c10201-3368-42c5-8818-ea02aab4842f" (UID: "45c10201-3368-42c5-8818-ea02aab4842f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.834266 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm" (OuterVolumeSpecName: "kube-api-access-x75gm") pod "45c10201-3368-42c5-8818-ea02aab4842f" (UID: "45c10201-3368-42c5-8818-ea02aab4842f"). InnerVolumeSpecName "kube-api-access-x75gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.857131 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data" (OuterVolumeSpecName: "config-data") pod "45c10201-3368-42c5-8818-ea02aab4842f" (UID: "45c10201-3368-42c5-8818-ea02aab4842f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.863583 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45c10201-3368-42c5-8818-ea02aab4842f" (UID: "45c10201-3368-42c5-8818-ea02aab4842f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.932573 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45c10201-3368-42c5-8818-ea02aab4842f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.932608 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.932618 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/45c10201-3368-42c5-8818-ea02aab4842f-kube-api-access-x75gm\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:02 crc kubenswrapper[4680]: I0126 16:29:02.932626 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c10201-3368-42c5-8818-ea02aab4842f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.091411 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.091756 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerDied","Data":"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb"} Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.091825 4680 scope.go:117] "RemoveContainer" containerID="2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.093288 4680 generic.go:334] "Generic (PLEG): container finished" podID="45c10201-3368-42c5-8818-ea02aab4842f" containerID="2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb" exitCode=0 Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.093690 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45c10201-3368-42c5-8818-ea02aab4842f","Type":"ContainerDied","Data":"5ca9cb898b2afbefd850eb0db20e32393c359fa1e11bbe54eb6364619724e6f3"} Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.110369 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: W0126 16:29:03.112321 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf08c6bd8_31d6_4769_8ed3_9e46e40a4e66.slice/crio-9881a8cea88caff63bd8045c920f83d841800b6e0590929474625d93332ddc0d WatchSource:0}: Error finding container 9881a8cea88caff63bd8045c920f83d841800b6e0590929474625d93332ddc0d: Status 404 returned error can't find the container with id 9881a8cea88caff63bd8045c920f83d841800b6e0590929474625d93332ddc0d Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.127355 4680 scope.go:117] "RemoveContainer" containerID="ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.158190 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.205995 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f475ff-bcba-4cdd-a6ed-62be26882b42" 
path="/var/lib/kubelet/pods/22f475ff-bcba-4cdd-a6ed-62be26882b42/volumes" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.207017 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df331d1-8589-47bf-bf1e-29618751c2d3" path="/var/lib/kubelet/pods/3df331d1-8589-47bf-bf1e-29618751c2d3/volumes" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.208361 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" path="/var/lib/kubelet/pods/c61b97a0-f2b3-4935-a1a0-d6e3484410e5/volumes" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.210028 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.210089 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.213879 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-log" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.213912 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-log" Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.213962 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-api" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.213971 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-api" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.214270 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61b97a0-f2b3-4935-a1a0-d6e3484410e5" containerName="horizon" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.214291 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-api" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.214313 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c10201-3368-42c5-8818-ea02aab4842f" containerName="nova-api-log" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.214508 4680 scope.go:117] "RemoveContainer" containerID="2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.215413 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.216981 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.218901 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb\": container with ID starting with 2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb not found: ID does not exist" containerID="2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.219029 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb"} err="failed to get container status \"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb\": rpc error: code = NotFound desc = could not find container \"2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb\": container with ID starting with 2e828fac96aa5273b88cb876ac73b5ab2bc2c5745d77631be2cd600563fdeecb not found: ID does not exist" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.219132 4680 scope.go:117] "RemoveContainer" containerID="ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.218983 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.219887 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642\": container with ID starting with ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642 not found: ID does not exist" containerID="ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.219967 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642"} err="failed to get container status \"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642\": rpc error: code = NotFound desc = could not find container \"ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642\": container with ID starting with ade36ca78eaac7e5d01fd7cc0beb90ce080f6ca243e06d6bdfc2d0a6a5565642 not found: ID does not exist" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.328917 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.331086 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="proxy-httpd" containerID="cri-o://ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d" gracePeriod=30 Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.331115 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="sg-core" containerID="cri-o://41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf" gracePeriod=30 Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.331131 
4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-notification-agent" containerID="cri-o://ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9" gracePeriod=30 Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.331164 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-central-agent" containerID="cri-o://7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c" gracePeriod=30 Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.343747 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.344441 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.344644 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlm75\" (UniqueName: \"kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.344772 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.359764 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 is running failed: container process not found" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.360570 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 is running failed: container process not found" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:29:03 crc kubenswrapper[4680]: E0126 16:29:03.360863 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 is running failed: container process not found" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:29:03 crc 
kubenswrapper[4680]: E0126 16:29:03.360911 4680 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerName="nova-scheduler-scheduler" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.446424 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.446526 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.446548 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlm75\" (UniqueName: \"kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.446625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.447236 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.459597 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.465876 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.473982 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlm75\" (UniqueName: \"kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75\") pod \"nova-api-0\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.545596 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:03 crc kubenswrapper[4680]: I0126 16:29:03.996155 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.059527 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data\") pod \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.059659 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwrjv\" (UniqueName: \"kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv\") pod \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.059820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle\") pod \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\" (UID: \"4a4b45ba-7122-4134-a15a-06b560fe6c4e\") " Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.069477 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv" (OuterVolumeSpecName: "kube-api-access-zwrjv") pod "4a4b45ba-7122-4134-a15a-06b560fe6c4e" (UID: "4a4b45ba-7122-4134-a15a-06b560fe6c4e"). InnerVolumeSpecName "kube-api-access-zwrjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.134899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66","Type":"ContainerStarted","Data":"9881a8cea88caff63bd8045c920f83d841800b6e0590929474625d93332ddc0d"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.142683 4680 generic.go:334] "Generic (PLEG): container finished" podID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerID="ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d" exitCode=0 Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.142817 4680 generic.go:334] "Generic (PLEG): container finished" podID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerID="41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf" exitCode=2 Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.142886 4680 generic.go:334] "Generic (PLEG): container finished" podID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerID="7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c" exitCode=0 Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.142994 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerDied","Data":"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.143084 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerDied","Data":"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.143146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerDied","Data":"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.146267 4680 generic.go:334] "Generic (PLEG): container finished" podID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" exitCode=0 Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.146408 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.146531 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a4b45ba-7122-4134-a15a-06b560fe6c4e","Type":"ContainerDied","Data":"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.146611 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a4b45ba-7122-4134-a15a-06b560fe6c4e","Type":"ContainerDied","Data":"ff84466368422cc1b31951da3623bef4aa91eaf860310729becf4a8d8e181f31"} Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.146643 4680 scope.go:117] "RemoveContainer" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.163169 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwrjv\" (UniqueName: \"kubernetes.io/projected/4a4b45ba-7122-4134-a15a-06b560fe6c4e-kube-api-access-zwrjv\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.185862 4680 scope.go:117] "RemoveContainer" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.185849 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a4b45ba-7122-4134-a15a-06b560fe6c4e" (UID: "4a4b45ba-7122-4134-a15a-06b560fe6c4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:04 crc kubenswrapper[4680]: E0126 16:29:04.187131 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0\": container with ID starting with 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 not found: ID does not exist" containerID="32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.187175 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0"} err="failed to get container status \"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0\": rpc error: code = NotFound desc = could not find container \"32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0\": container with ID starting with 32076176fda86c4668251db14ca4734edcc23ef8931ea7741171febb5e1198d0 not found: ID does not exist" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.192523 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data" (OuterVolumeSpecName: "config-data") pod "4a4b45ba-7122-4134-a15a-06b560fe6c4e" (UID: "4a4b45ba-7122-4134-a15a-06b560fe6c4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.233669 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.265139 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.265171 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b45ba-7122-4134-a15a-06b560fe6c4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.478967 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.489951 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.513449 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:04 crc kubenswrapper[4680]: E0126 16:29:04.513861 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerName="nova-scheduler-scheduler" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.513881 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerName="nova-scheduler-scheduler" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.514110 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" containerName="nova-scheduler-scheduler" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.514715 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.516227 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.527898 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.570276 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.570425 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.570467 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c4m\" (UniqueName: \"kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.682979 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.683044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8c4m\" (UniqueName: \"kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.683115 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.688161 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.688940 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.702774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8c4m\" (UniqueName: 
\"kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m\") pod \"nova-scheduler-0\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:04 crc kubenswrapper[4680]: I0126 16:29:04.833500 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.162058 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerStarted","Data":"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b"} Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.162373 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerStarted","Data":"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c"} Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.162385 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerStarted","Data":"c2a732e3be9ccb54bfcaf69d9345dbc34887943eba3a9702123e5c435e66e4ed"} Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.163416 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08c6bd8-31d6-4769-8ed3-9e46e40a4e66","Type":"ContainerStarted","Data":"e219d24adf0367062d390190f9827ed0b1e8ae45fd72d1d9f9c2b93a1096e932"} Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.163844 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.186373 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.186357716 podStartE2EDuration="2.186357716s" podCreationTimestamp="2026-01-26 16:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:05.185589475 +0000 UTC m=+1420.346861764" watchObservedRunningTime="2026-01-26 16:29:05.186357716 +0000 UTC m=+1420.347629985" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.188010 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c10201-3368-42c5-8818-ea02aab4842f" path="/var/lib/kubelet/pods/45c10201-3368-42c5-8818-ea02aab4842f/volumes" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.189054 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4b45ba-7122-4134-a15a-06b560fe6c4e" path="/var/lib/kubelet/pods/4a4b45ba-7122-4134-a15a-06b560fe6c4e/volumes" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.215699 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.532872202 podStartE2EDuration="3.215676144s" podCreationTimestamp="2026-01-26 16:29:02 +0000 UTC" firstStartedPulling="2026-01-26 16:29:03.126915499 +0000 UTC m=+1418.288187768" lastFinishedPulling="2026-01-26 16:29:03.809719441 +0000 UTC m=+1418.970991710" observedRunningTime="2026-01-26 16:29:05.199737934 +0000 UTC m=+1420.361010203" watchObservedRunningTime="2026-01-26 16:29:05.215676144 +0000 UTC m=+1420.376948403" Jan 26 16:29:05 crc kubenswrapper[4680]: W0126 16:29:05.299042 4680 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d74302a_11cc_424f_b8d0_223ccb204523.slice/crio-b198e72ca22dd34c4b2b87c3214ecb9a90ae0c9fb905ff25fd4f0392cd669685 WatchSource:0}: Error finding container b198e72ca22dd34c4b2b87c3214ecb9a90ae0c9fb905ff25fd4f0392cd669685: Status 404 returned error can't find the container with id b198e72ca22dd34c4b2b87c3214ecb9a90ae0c9fb905ff25fd4f0392cd669685 Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.305935 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.750020 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822287 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw8fj\" (UniqueName: \"kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822333 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822367 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822548 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822579 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822657 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.822691 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml\") pod \"d6422e03-8a4e-48fe-8413-c88c3d137eab\" (UID: \"d6422e03-8a4e-48fe-8413-c88c3d137eab\") " Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.823618 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.823712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.827413 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj" (OuterVolumeSpecName: "kube-api-access-qw8fj") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "kube-api-access-qw8fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.838045 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts" (OuterVolumeSpecName: "scripts") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.865589 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.925502 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw8fj\" (UniqueName: \"kubernetes.io/projected/d6422e03-8a4e-48fe-8413-c88c3d137eab-kube-api-access-qw8fj\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.925543 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.925555 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.925592 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6422e03-8a4e-48fe-8413-c88c3d137eab-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.925603 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.929173 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data" (OuterVolumeSpecName: "config-data") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:05 crc kubenswrapper[4680]: I0126 16:29:05.930763 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6422e03-8a4e-48fe-8413-c88c3d137eab" (UID: "d6422e03-8a4e-48fe-8413-c88c3d137eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.027504 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.027545 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6422e03-8a4e-48fe-8413-c88c3d137eab-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.176665 4680 generic.go:334] "Generic (PLEG): container finished" podID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerID="ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9" exitCode=0 Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.176728 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerDied","Data":"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9"} Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.176754 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6422e03-8a4e-48fe-8413-c88c3d137eab","Type":"ContainerDied","Data":"a0ca72b5fa5d082d103695230fd2e4af0eafc102f86d0c97f51120d85ab5def2"} Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.176771 4680 scope.go:117] "RemoveContainer" containerID="ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.176876 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.180117 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d74302a-11cc-424f-b8d0-223ccb204523","Type":"ContainerStarted","Data":"5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36"} Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.180149 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d74302a-11cc-424f-b8d0-223ccb204523","Type":"ContainerStarted","Data":"b198e72ca22dd34c4b2b87c3214ecb9a90ae0c9fb905ff25fd4f0392cd669685"} Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.213780 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.21375952 podStartE2EDuration="2.21375952s" podCreationTimestamp="2026-01-26 16:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:06.205678052 +0000 UTC m=+1421.366950321" watchObservedRunningTime="2026-01-26 16:29:06.21375952 +0000 UTC m=+1421.375031789" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.227576 4680 scope.go:117] "RemoveContainer" containerID="41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.232148 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.244323 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.272712 4680 scope.go:117] "RemoveContainer" containerID="ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275209 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.275638 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-notification-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275655 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-notification-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.275678 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-central-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275685 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-central-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.275696 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="proxy-httpd" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275702 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="proxy-httpd" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.275722 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="sg-core" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275727 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="sg-core" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275913 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-notification-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275936 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="proxy-httpd" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275949 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="ceilometer-central-agent" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.275958 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" containerName="sg-core" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.277641 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.280672 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.280862 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.288492 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.299860 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.318636 4680 scope.go:117] "RemoveContainer" containerID="7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.333087 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.333212 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.333254 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.341013 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.341129 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.341186 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.341285 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.341356 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.355202 4680 scope.go:117] "RemoveContainer" containerID="ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.356512 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d\": container with ID starting with ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d not found: ID does not exist" containerID="ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.356561 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d"} err="failed to get container status \"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d\": rpc error: code = NotFound desc = could not find container \"ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d\": container with ID starting with ad72e9714a5c331b64f8ac805007027af715a9932587e973d5aa95622cdd5e4d not found: ID does not exist" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.356592 4680 scope.go:117] "RemoveContainer" containerID="41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.357490 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf\": container with ID starting with 41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf not found: ID does not exist" containerID="41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.357525 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf"} err="failed to get container status \"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf\": rpc 
error: code = NotFound desc = could not find container \"41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf\": container with ID starting with 41977d23a4dd12a34be787145bcde338333fd296855c6bb8adf4678bedfeb7bf not found: ID does not exist" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.357547 4680 scope.go:117] "RemoveContainer" containerID="ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.358313 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9\": container with ID starting with ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9 not found: ID does not exist" containerID="ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.358346 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9"} err="failed to get container status \"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9\": rpc error: code = NotFound desc = could not find container \"ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9\": container with ID starting with ef534806a8578c0e8a958718f714a5fd82906abc8135ee8434693c9b74844ab9 not found: ID does not exist" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.358368 4680 scope.go:117] "RemoveContainer" containerID="7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c" Jan 26 16:29:06 crc kubenswrapper[4680]: E0126 16:29:06.359036 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c\": container with ID starting with 7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c not found: ID does not exist" containerID="7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.359080 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c"} err="failed to get container status \"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c\": rpc error: code = NotFound desc = could not find container \"7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c\": container with ID starting with 7ce51e3b9c38eb7c06e4dd526a0c397cd15a0bf89718b6b9538fcf88f14d3b7c not found: ID does not exist" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443208 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443297 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443327 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443385 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443412 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443464 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443498 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.443548 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.444447 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.445643 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.450916 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.452581 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.453321 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.454417 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.463028 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.467207 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx\") pod \"ceilometer-0\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") " pod="openstack/ceilometer-0" Jan 26 16:29:06 crc kubenswrapper[4680]: I0126 16:29:06.610413 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:07 crc kubenswrapper[4680]: I0126 16:29:07.061771 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:07 crc kubenswrapper[4680]: I0126 16:29:07.193526 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6422e03-8a4e-48fe-8413-c88c3d137eab" path="/var/lib/kubelet/pods/d6422e03-8a4e-48fe-8413-c88c3d137eab/volumes" Jan 26 16:29:07 crc kubenswrapper[4680]: I0126 16:29:07.222358 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerStarted","Data":"94ced415ac116972efbe47ca82208a03cb8969e0febd7a0290ed96c5a62bdd47"} Jan 26 16:29:08 crc kubenswrapper[4680]: I0126 16:29:08.232218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerStarted","Data":"e5dead3efe734487e7c2e8be96b58c0e4e9f19de4f103ea8477283a8dfe3198b"} Jan 26 16:29:08 crc kubenswrapper[4680]: I0126 16:29:08.232498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerStarted","Data":"27cec085972638723c4fa757251ef1e00ae43e3acf61fcda3faa073cdf20e2df"} Jan 26 16:29:09 crc kubenswrapper[4680]: I0126 16:29:09.262947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerStarted","Data":"7060b6e7b7e8b4821ded18145bc726e8871c8de62ed6a3bbe752e30bcdf578ef"} Jan 26 16:29:09 crc kubenswrapper[4680]: I0126 16:29:09.834728 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 16:29:10 crc kubenswrapper[4680]: I0126 16:29:10.214330 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 16:29:10 crc kubenswrapper[4680]: I0126 16:29:10.282079 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerStarted","Data":"f4cfad1f99ee8b81bb5604fc72e33cb26e913088ac1529fc14deacb7b9e2557e"} Jan 26 16:29:10 crc kubenswrapper[4680]: I0126 16:29:10.282565 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:29:10 crc kubenswrapper[4680]: I0126 16:29:10.310687 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.646857921 podStartE2EDuration="4.310664255s" podCreationTimestamp="2026-01-26 16:29:06 +0000 UTC" firstStartedPulling="2026-01-26 16:29:07.077714398 +0000 UTC m=+1422.238986667" lastFinishedPulling="2026-01-26 16:29:09.741520732 +0000 UTC m=+1424.902793001" observedRunningTime="2026-01-26 16:29:10.300719124 +0000 UTC m=+1425.461991403" watchObservedRunningTime="2026-01-26 16:29:10.310664255 +0000 UTC m=+1425.471936524" Jan 26 16:29:11 crc kubenswrapper[4680]: E0126 16:29:11.077252 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:29:12 crc kubenswrapper[4680]: I0126 16:29:12.495200 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 16:29:13 crc kubenswrapper[4680]: I0126 16:29:13.546494 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:29:13 crc kubenswrapper[4680]: I0126 16:29:13.546808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:29:14 crc kubenswrapper[4680]: I0126 16:29:14.628323 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:29:14 crc kubenswrapper[4680]: I0126 16:29:14.628350 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:29:14 crc kubenswrapper[4680]: I0126 16:29:14.834106 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 16:29:14 crc kubenswrapper[4680]: I0126 16:29:14.861149 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 16:29:15 crc kubenswrapper[4680]: I0126 16:29:15.380666 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 16:29:16 crc kubenswrapper[4680]: I0126 16:29:16.980964 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:29:16 crc kubenswrapper[4680]: I0126 16:29:16.981334 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.219877 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.350254 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.369246 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76vg5\" (UniqueName: \"kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5\") pod \"c26ddc5f-c890-49a0-b720-627df50abaaa\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.369314 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data\") pod \"c26ddc5f-c890-49a0-b720-627df50abaaa\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.369384 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle\") pod \"c26ddc5f-c890-49a0-b720-627df50abaaa\" (UID: \"c26ddc5f-c890-49a0-b720-627df50abaaa\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.381360 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5" (OuterVolumeSpecName: "kube-api-access-76vg5") pod "c26ddc5f-c890-49a0-b720-627df50abaaa" (UID: "c26ddc5f-c890-49a0-b720-627df50abaaa"). InnerVolumeSpecName "kube-api-access-76vg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.389313 4680 generic.go:334] "Generic (PLEG): container finished" podID="c26ddc5f-c890-49a0-b720-627df50abaaa" containerID="03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51" exitCode=137 Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.389432 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c26ddc5f-c890-49a0-b720-627df50abaaa","Type":"ContainerDied","Data":"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51"} Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.389466 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c26ddc5f-c890-49a0-b720-627df50abaaa","Type":"ContainerDied","Data":"1b4e8f0796ac0715b7571189aea9683b969a7b099810dae1b70ebd7ef8c51fb3"} Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.389559 4680 scope.go:117] "RemoveContainer" containerID="03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.389723 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.397711 4680 generic.go:334] "Generic (PLEG): container finished" podID="15819343-28c9-4353-92ec-600a3e910bcb" containerID="30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1" exitCode=137 Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.397765 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerDied","Data":"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1"} Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.397797 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15819343-28c9-4353-92ec-600a3e910bcb","Type":"ContainerDied","Data":"a541ffe87010d8a13c68121fbe9890746efca5adb5270f839c1e227bd50f138f"} Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.397859 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.424546 4680 scope.go:117] "RemoveContainer" containerID="03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.425060 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51\": container with ID starting with 03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51 not found: ID does not exist" containerID="03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.425139 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51"} err="failed to get container status \"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51\": rpc error: code = NotFound desc = could not find container \"03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51\": container with ID starting with 03d8e17a0b026c46fd74d42ac533567247b6a1965cacebfe6a57783e10e53f51 not found: ID does not exist" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.425215 4680 scope.go:117] "RemoveContainer" containerID="30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.439382 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data" (OuterVolumeSpecName: "config-data") pod "c26ddc5f-c890-49a0-b720-627df50abaaa" (UID: "c26ddc5f-c890-49a0-b720-627df50abaaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.446420 4680 scope.go:117] "RemoveContainer" containerID="9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.451210 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c26ddc5f-c890-49a0-b720-627df50abaaa" (UID: "c26ddc5f-c890-49a0-b720-627df50abaaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.464669 4680 scope.go:117] "RemoveContainer" containerID="30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.465090 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1\": container with ID starting with 30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1 not found: ID does not exist" containerID="30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.465141 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1"} err="failed to get container status \"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1\": rpc error: code = NotFound desc = could not find container \"30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1\": container with ID starting with 30746b9ce5c166cb24802797dcbe02be9695a4426c0b97dba96526078441f9a1 not found: ID does not exist" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.465172 4680 scope.go:117] "RemoveContainer" containerID="9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.465631 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf\": container with ID starting with 9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf not found: ID does not exist" containerID="9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.465660 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf"} err="failed to get container status \"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf\": rpc error: code = NotFound desc = could not find container \"9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf\": container with ID starting with 9a32b91489bcf0cf05091cd2c0659ea0bfae11d0beb6bdb63026383f0aa7a5bf not found: ID does not exist" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.470453 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data\") pod \"15819343-28c9-4353-92ec-600a3e910bcb\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.470653 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle\") pod \"15819343-28c9-4353-92ec-600a3e910bcb\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.470690 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xdsv\" (UniqueName: \"kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv\") pod \"15819343-28c9-4353-92ec-600a3e910bcb\" (UID: 
\"15819343-28c9-4353-92ec-600a3e910bcb\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.470722 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs\") pod \"15819343-28c9-4353-92ec-600a3e910bcb\" (UID: \"15819343-28c9-4353-92ec-600a3e910bcb\") " Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.471323 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs" (OuterVolumeSpecName: "logs") pod "15819343-28c9-4353-92ec-600a3e910bcb" (UID: "15819343-28c9-4353-92ec-600a3e910bcb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.471846 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76vg5\" (UniqueName: \"kubernetes.io/projected/c26ddc5f-c890-49a0-b720-627df50abaaa-kube-api-access-76vg5\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.471874 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.471890 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26ddc5f-c890-49a0-b720-627df50abaaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.471903 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15819343-28c9-4353-92ec-600a3e910bcb-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.474267 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv" (OuterVolumeSpecName: "kube-api-access-8xdsv") pod "15819343-28c9-4353-92ec-600a3e910bcb" (UID: "15819343-28c9-4353-92ec-600a3e910bcb"). InnerVolumeSpecName "kube-api-access-8xdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.491894 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15c480_9662_404f_9778_d4e130490ed0.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.499887 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15819343-28c9-4353-92ec-600a3e910bcb" (UID: "15819343-28c9-4353-92ec-600a3e910bcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.500419 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data" (OuterVolumeSpecName: "config-data") pod "15819343-28c9-4353-92ec-600a3e910bcb" (UID: "15819343-28c9-4353-92ec-600a3e910bcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.573737 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.574040 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15819343-28c9-4353-92ec-600a3e910bcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.574053 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xdsv\" (UniqueName: \"kubernetes.io/projected/15819343-28c9-4353-92ec-600a3e910bcb-kube-api-access-8xdsv\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.721635 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.729097 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.742874 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.750466 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.759768 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.760211 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-log" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760237 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-log" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.760255 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26ddc5f-c890-49a0-b720-627df50abaaa" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760264 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26ddc5f-c890-49a0-b720-627df50abaaa" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:29:21 crc kubenswrapper[4680]: E0126 16:29:21.760285 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-metadata" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760292 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-metadata" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760510 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-metadata" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760537 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="15819343-28c9-4353-92ec-600a3e910bcb" containerName="nova-metadata-log" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.760553 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26ddc5f-c890-49a0-b720-627df50abaaa" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:29:21 crc 
kubenswrapper[4680]: I0126 16:29:21.761204 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.764043 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.764285 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.764410 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.768061 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.827259 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.828832 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.831189 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.831246 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.846697 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.879666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.879728 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.879954 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.880029 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.880161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4rq\" (UniqueName: 
\"kubernetes.io/projected/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-kube-api-access-xq4rq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.981893 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.981929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.981974 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982036 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrd4b\" (UniqueName: \"kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982055 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982095 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982113 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982152 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4rq\" (UniqueName: \"kubernetes.io/projected/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-kube-api-access-xq4rq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982185 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.982220 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.986921 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.987256 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.987394 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.988214 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:21 crc kubenswrapper[4680]: I0126 16:29:21.998568 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4rq\" (UniqueName: \"kubernetes.io/projected/cedda5b4-a1eb-4b0a-aa4b-540c258d02e1-kube-api-access-xq4rq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.083877 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd4b\" (UniqueName: \"kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.085162 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.085350 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0" 
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.085461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.085477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.086284 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.097847 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.097901 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.098808 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.100774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd4b\" (UniqueName: \"kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b\") pod \"nova-metadata-0\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.128180 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.145368 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.606650 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:29:22 crc kubenswrapper[4680]: W0126 16:29:22.611556 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dfa4211_539a_410c_b22a_f0aa33bd2ea2.slice/crio-e6751d143c7d2e240d36a6fb5f4ddaef68bd2cb24500b47d0559d48a57ae4593 WatchSource:0}: Error finding container e6751d143c7d2e240d36a6fb5f4ddaef68bd2cb24500b47d0559d48a57ae4593: Status 404 returned error can't find the container with id e6751d143c7d2e240d36a6fb5f4ddaef68bd2cb24500b47d0559d48a57ae4593
Jan 26 16:29:22 crc kubenswrapper[4680]: I0126 16:29:22.615157 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.181771 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15819343-28c9-4353-92ec-600a3e910bcb" path="/var/lib/kubelet/pods/15819343-28c9-4353-92ec-600a3e910bcb/volumes"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.183156 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26ddc5f-c890-49a0-b720-627df50abaaa" path="/var/lib/kubelet/pods/c26ddc5f-c890-49a0-b720-627df50abaaa/volumes"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.421453 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerStarted","Data":"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b"}
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.421766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerStarted","Data":"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd"}
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.421776 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerStarted","Data":"e6751d143c7d2e240d36a6fb5f4ddaef68bd2cb24500b47d0559d48a57ae4593"}
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.424739 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1","Type":"ContainerStarted","Data":"2ace3e9a194bbfe4c16bda99a8366e0ba69437707c5b102264ad9f9fa27b3be2"}
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.424774 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cedda5b4-a1eb-4b0a-aa4b-540c258d02e1","Type":"ContainerStarted","Data":"e4512f216b05a2dfc81e72a5e58015f8a697cd33ed42b28e874dc78d676f8df9"}
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.447483 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.447461981 podStartE2EDuration="2.447461981s" podCreationTimestamp="2026-01-26 16:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:23.447381978 +0000 UTC m=+1438.608654247" watchObservedRunningTime="2026-01-26 16:29:23.447461981 +0000 UTC m=+1438.608734260"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.476666 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.476643485 podStartE2EDuration="2.476643485s" podCreationTimestamp="2026-01-26 16:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:23.467013003 +0000 UTC m=+1438.628285272" watchObservedRunningTime="2026-01-26 16:29:23.476643485 +0000 UTC m=+1438.637915754"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.551944 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.552534 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.553826 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 16:29:23 crc kubenswrapper[4680]: I0126 16:29:23.558422 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.435175 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.440816 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.656202 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"]
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.658991 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.675146 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"]
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.742778 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9cn\" (UniqueName: \"kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.743260 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.743320 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.743355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.743384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.743406 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.844769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.844836 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.844877 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.844898 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.845012 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9cn\" (UniqueName: \"kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.845096 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.846016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.846793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.846906 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.847479 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.848089 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.889454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9cn\" (UniqueName: \"kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn\") pod \"dnsmasq-dns-8f686847-5rzkm\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:24 crc kubenswrapper[4680]: I0126 16:29:24.992699 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:25 crc kubenswrapper[4680]: I0126 16:29:25.603983 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"]
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.459021 4680 generic.go:334] "Generic (PLEG): container finished" podID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerID="ba2e4164c3c75a3c1736f3a136a4936b66d8c76359a8f9a008b16e795dbdaf98" exitCode=0
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.459291 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f686847-5rzkm" event={"ID":"324b761d-e4a0-47ba-99ee-f3a561ac07a3","Type":"ContainerDied","Data":"ba2e4164c3c75a3c1736f3a136a4936b66d8c76359a8f9a008b16e795dbdaf98"}
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.459464 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f686847-5rzkm" event={"ID":"324b761d-e4a0-47ba-99ee-f3a561ac07a3","Type":"ContainerStarted","Data":"14e7adce2bb6b88ebdcd1f245b1de7cd4119b41a21737bafe68103af000fcc98"}
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.968736 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.969628 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-central-agent" containerID="cri-o://27cec085972638723c4fa757251ef1e00ae43e3acf61fcda3faa073cdf20e2df" gracePeriod=30
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.969844 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="proxy-httpd" containerID="cri-o://f4cfad1f99ee8b81bb5604fc72e33cb26e913088ac1529fc14deacb7b9e2557e" gracePeriod=30
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.969909 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-notification-agent" containerID="cri-o://e5dead3efe734487e7c2e8be96b58c0e4e9f19de4f103ea8477283a8dfe3198b" gracePeriod=30
Jan 26 16:29:26 crc kubenswrapper[4680]: I0126 16:29:26.970053 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="sg-core" containerID="cri-o://7060b6e7b7e8b4821ded18145bc726e8871c8de62ed6a3bbe752e30bcdf578ef" gracePeriod=30
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.002125 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.128675 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.146032 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.146103 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.184939 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.470352 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f686847-5rzkm" event={"ID":"324b761d-e4a0-47ba-99ee-f3a561ac07a3","Type":"ContainerStarted","Data":"05672050f3386b5bffd5d30de43ddd32c63f44e136ca849a73e5cab96ba3be83"}
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.470849 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f686847-5rzkm"
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473316 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerID="f4cfad1f99ee8b81bb5604fc72e33cb26e913088ac1529fc14deacb7b9e2557e" exitCode=0
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473349 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerID="7060b6e7b7e8b4821ded18145bc726e8871c8de62ed6a3bbe752e30bcdf578ef" exitCode=2
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473360 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerID="27cec085972638723c4fa757251ef1e00ae43e3acf61fcda3faa073cdf20e2df" exitCode=0
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473537 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-log" containerID="cri-o://787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c" gracePeriod=30
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473832 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerDied","Data":"f4cfad1f99ee8b81bb5604fc72e33cb26e913088ac1529fc14deacb7b9e2557e"}
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473868 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerDied","Data":"7060b6e7b7e8b4821ded18145bc726e8871c8de62ed6a3bbe752e30bcdf578ef"}
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473883 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerDied","Data":"27cec085972638723c4fa757251ef1e00ae43e3acf61fcda3faa073cdf20e2df"}
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.473945 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-api" containerID="cri-o://9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b" gracePeriod=30
Jan 26 16:29:27 crc kubenswrapper[4680]: I0126 16:29:27.493675 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f686847-5rzkm" podStartSLOduration=3.493658853 podStartE2EDuration="3.493658853s" podCreationTimestamp="2026-01-26 16:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:27.490477993 +0000 UTC m=+1442.651750262" watchObservedRunningTime="2026-01-26 16:29:27.493658853 +0000 UTC m=+1442.654931122"
Jan 26 16:29:28 crc kubenswrapper[4680]: I0126 16:29:28.483355 4680 generic.go:334] "Generic (PLEG): container finished" podID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerID="787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c" exitCode=143
Jan 26 16:29:28 crc kubenswrapper[4680]: I0126 16:29:28.483456 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerDied","Data":"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c"}
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.511586 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerID="e5dead3efe734487e7c2e8be96b58c0e4e9f19de4f103ea8477283a8dfe3198b" exitCode=0
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.511888 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerDied","Data":"e5dead3efe734487e7c2e8be96b58c0e4e9f19de4f103ea8477283a8dfe3198b"}
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.785809 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866064 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866224 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866285 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866305 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866492 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866523 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.866549 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml\") pod \"6e359015-1a2c-4f1a-80ba-63d6d57912af\" (UID: \"6e359015-1a2c-4f1a-80ba-63d6d57912af\") "
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.867500 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.868055 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.877293 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts" (OuterVolumeSpecName: "scripts") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.884241 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx" (OuterVolumeSpecName: "kube-api-access-cdhmx") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "kube-api-access-cdhmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.905854 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.955015 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "ceilometer-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970300 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhmx\" (UniqueName: \"kubernetes.io/projected/6e359015-1a2c-4f1a-80ba-63d6d57912af-kube-api-access-cdhmx\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970334 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970344 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970353 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970361 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.970372 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e359015-1a2c-4f1a-80ba-63d6d57912af-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.981858 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:29 crc kubenswrapper[4680]: I0126 16:29:29.998282 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data" (OuterVolumeSpecName: "config-data") pod "6e359015-1a2c-4f1a-80ba-63d6d57912af" (UID: "6e359015-1a2c-4f1a-80ba-63d6d57912af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.071616 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.071652 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e359015-1a2c-4f1a-80ba-63d6d57912af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.521924 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e359015-1a2c-4f1a-80ba-63d6d57912af","Type":"ContainerDied","Data":"94ced415ac116972efbe47ca82208a03cb8969e0febd7a0290ed96c5a62bdd47"} Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.522263 4680 scope.go:117] "RemoveContainer" containerID="f4cfad1f99ee8b81bb5604fc72e33cb26e913088ac1529fc14deacb7b9e2557e" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.522410 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.543533 4680 scope.go:117] "RemoveContainer" containerID="7060b6e7b7e8b4821ded18145bc726e8871c8de62ed6a3bbe752e30bcdf578ef" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.564529 4680 scope.go:117] "RemoveContainer" containerID="e5dead3efe734487e7c2e8be96b58c0e4e9f19de4f103ea8477283a8dfe3198b" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.565411 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.576515 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.590154 4680 scope.go:117] "RemoveContainer" containerID="27cec085972638723c4fa757251ef1e00ae43e3acf61fcda3faa073cdf20e2df" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.592494 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:30 crc kubenswrapper[4680]: E0126 16:29:30.592898 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-notification-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.592916 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-notification-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: E0126 16:29:30.592944 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="proxy-httpd" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.592952 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="proxy-httpd" Jan 26 16:29:30 crc kubenswrapper[4680]: E0126 16:29:30.592972 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-central-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.592980 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-central-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: E0126 16:29:30.592991 4680 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="sg-core" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.592999 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="sg-core" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.593690 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="sg-core" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.593717 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="proxy-httpd" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.593728 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-notification-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.593748 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" containerName="ceilometer-central-agent" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.604526 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.606886 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.606943 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.606899 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.610061 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796238 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-config-data\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796274 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796308 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796354 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796437 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796491 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkz8\" (UniqueName: \"kubernetes.io/projected/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-kube-api-access-6bkz8\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.796527 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-scripts\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898529 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-config-data\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898580 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898666 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898722 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898746 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898805 
4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bkz8\" (UniqueName: \"kubernetes.io/projected/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-kube-api-access-6bkz8\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.898843 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-scripts\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.900144 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.900200 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.904510 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-scripts\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.912367 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.914538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.921718 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.924391 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-config-data\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.925190 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bkz8\" (UniqueName: \"kubernetes.io/projected/f43c12ef-d0d4-4ff9-802f-652e3e4188cc-kube-api-access-6bkz8\") pod \"ceilometer-0\" (UID: \"f43c12ef-d0d4-4ff9-802f-652e3e4188cc\") " pod="openstack/ceilometer-0" Jan 26 16:29:30 crc kubenswrapper[4680]: I0126 16:29:30.936033 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.084311 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.188925 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e359015-1a2c-4f1a-80ba-63d6d57912af" path="/var/lib/kubelet/pods/6e359015-1a2c-4f1a-80ba-63d6d57912af/volumes" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.202682 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle\") pod \"02c88dff-2be3-49c3-b50a-f9a257dc6904\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.202949 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlm75\" (UniqueName: \"kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75\") pod \"02c88dff-2be3-49c3-b50a-f9a257dc6904\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.203046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs\") pod \"02c88dff-2be3-49c3-b50a-f9a257dc6904\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.203258 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data\") pod \"02c88dff-2be3-49c3-b50a-f9a257dc6904\" (UID: \"02c88dff-2be3-49c3-b50a-f9a257dc6904\") " Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.205400 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs" (OuterVolumeSpecName: "logs") pod "02c88dff-2be3-49c3-b50a-f9a257dc6904" (UID: "02c88dff-2be3-49c3-b50a-f9a257dc6904"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.216499 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75" (OuterVolumeSpecName: "kube-api-access-rlm75") pod "02c88dff-2be3-49c3-b50a-f9a257dc6904" (UID: "02c88dff-2be3-49c3-b50a-f9a257dc6904"). InnerVolumeSpecName "kube-api-access-rlm75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.274584 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data" (OuterVolumeSpecName: "config-data") pod "02c88dff-2be3-49c3-b50a-f9a257dc6904" (UID: "02c88dff-2be3-49c3-b50a-f9a257dc6904"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.291294 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c88dff-2be3-49c3-b50a-f9a257dc6904" (UID: "02c88dff-2be3-49c3-b50a-f9a257dc6904"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.309560 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.309798 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlm75\" (UniqueName: \"kubernetes.io/projected/02c88dff-2be3-49c3-b50a-f9a257dc6904-kube-api-access-rlm75\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.309877 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c88dff-2be3-49c3-b50a-f9a257dc6904-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.309982 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c88dff-2be3-49c3-b50a-f9a257dc6904-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.494102 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.560650 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"19080d42b645ab7d9d15578ed16a83926010ac3020e951c02e755efbc434a65f"} Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.574614 4680 generic.go:334] "Generic (PLEG): container finished" podID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerID="9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b" exitCode=0 Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.574663 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerDied","Data":"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b"} Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.574692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02c88dff-2be3-49c3-b50a-f9a257dc6904","Type":"ContainerDied","Data":"c2a732e3be9ccb54bfcaf69d9345dbc34887943eba3a9702123e5c435e66e4ed"} Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.574712 4680 scope.go:117] "RemoveContainer" containerID="9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.574863 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.623387 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.659692 4680 scope.go:117] "RemoveContainer" containerID="787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.659705 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.680834 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:31 crc kubenswrapper[4680]: E0126 16:29:31.681303 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-api" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.681317 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-api" Jan 26 16:29:31 crc kubenswrapper[4680]: E0126 16:29:31.681343 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-log" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.681350 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-log" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.681517 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-api" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.681538 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" containerName="nova-api-log" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.682533 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.685207 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.685371 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.694264 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.723605 4680 scope.go:117] "RemoveContainer" containerID="9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b" Jan 26 16:29:31 crc kubenswrapper[4680]: E0126 16:29:31.725214 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b\": container with ID starting with 9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b not found: ID does not exist" containerID="9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.725256 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b"} err="failed to get container status \"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b\": rpc error: code = NotFound desc = could not find container \"9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b\": container with ID starting with 9cea28d1ab241bdb696bf2fe7c573e7efb9f33448061155264ac6cb57903ed9b not found: ID does not exist" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.725279 4680 scope.go:117] "RemoveContainer" containerID="787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.727237 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:31 crc kubenswrapper[4680]: E0126 16:29:31.728344 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c\": container with ID starting with 787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c not found: ID does not exist" containerID="787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.728382 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c"} err="failed to get container status \"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c\": rpc error: code = NotFound desc = could not find container \"787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c\": container with ID starting with 787c280b88feeb73a5cffaae1f315565a6091a51f7041402817b96b0351cdd6c not found: ID does not exist" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.820803 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 
16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.821524 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgfm\" (UniqueName: \"kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.821960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.822193 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.822701 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.823028 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: E0126 16:29:31.908453 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c88dff_2be3_49c3_b50a_f9a257dc6904.slice/crio-c2a732e3be9ccb54bfcaf69d9345dbc34887943eba3a9702123e5c435e66e4ed\": RecentStats: unable to find data in memory cache]" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925196 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925307 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shgfm\" (UniqueName: \"kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925348 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925384 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925429 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.925495 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.927209 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.933975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.934134 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.934755 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.943521 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:31 crc kubenswrapper[4680]: I0126 16:29:31.946726 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shgfm\" (UniqueName: \"kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm\") pod \"nova-api-0\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " pod="openstack/nova-api-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.020549 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.129422 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.146202 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.146266 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.168241 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.515794 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.606753 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerStarted","Data":"bfef7c07b2df5439252b64e0a46c6afd704bde29e17ec8f19483520e440529f3"} Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.610608 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"69459bf2e39c90b9660ec1a21ca5c5578dd2d311a502933711cae5e263d1cac0"} Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.610678 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"0f7e70ab3a5e0002633849b05e2bf96b4ce0e2caa442de96c480d37d0ee9986f"} Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.634392 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.906778 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ckt42"] Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.908328 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.912113 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.912417 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.932942 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ckt42"] Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.957734 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.957856 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.957939 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ll5\" (UniqueName: \"kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:32 crc kubenswrapper[4680]: I0126 16:29:32.958598 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.061213 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.061317 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.061340 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.061368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ll5\" (UniqueName: 
\"kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.083052 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.083534 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.083559 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.096729 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ll5\" (UniqueName: \"kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5\") pod \"nova-cell1-cell-mapping-ckt42\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") " pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.207928 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c88dff-2be3-49c3-b50a-f9a257dc6904" path="/var/lib/kubelet/pods/02c88dff-2be3-49c3-b50a-f9a257dc6904/volumes" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.211139 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.211427 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.260580 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.630218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerStarted","Data":"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3"} Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.630592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerStarted","Data":"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a"} Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.639996 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"2ada0d56da9d21d39a9a7417537233e0a1bd87a83230ed3f85ec05af505aa76e"} Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.676183 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.676165544 podStartE2EDuration="2.676165544s" podCreationTimestamp="2026-01-26 16:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:33.671601545 +0000 UTC m=+1448.832873814" watchObservedRunningTime="2026-01-26 16:29:33.676165544 +0000 UTC m=+1448.837437813" Jan 26 16:29:33 crc kubenswrapper[4680]: I0126 16:29:33.925208 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ckt42"] Jan 26 16:29:34 crc kubenswrapper[4680]: I0126 16:29:34.652545 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ckt42" event={"ID":"8236f9fb-94da-4bd0-8f13-e2ca69b30db5","Type":"ContainerStarted","Data":"ae551eabb13924844bd11aea1b2a269b000f951eea87b1cc39f9e2300fb7d547"} Jan 26 16:29:34 crc kubenswrapper[4680]: I0126 16:29:34.652876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ckt42" event={"ID":"8236f9fb-94da-4bd0-8f13-e2ca69b30db5","Type":"ContainerStarted","Data":"619766144edb4dc1161266ac6feb256c514d2939f878b10af7cc074714447115"} Jan 26 16:29:34 crc kubenswrapper[4680]: I0126 16:29:34.670701 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ckt42" podStartSLOduration=2.670670428 podStartE2EDuration="2.670670428s" podCreationTimestamp="2026-01-26 16:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:34.667447627 +0000 UTC m=+1449.828719896" watchObservedRunningTime="2026-01-26 16:29:34.670670428 +0000 UTC m=+1449.831942697" Jan 26 16:29:34 crc kubenswrapper[4680]: I0126 16:29:34.996280 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f686847-5rzkm" Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.088410 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.088684 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="dnsmasq-dns" 
containerID="cri-o://af29f1d58f380e28aef2d0c7b19edf5065e115b061ba4684eddeabd074429adf" gracePeriod=10 Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.663600 4680 generic.go:334] "Generic (PLEG): container finished" podID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerID="af29f1d58f380e28aef2d0c7b19edf5065e115b061ba4684eddeabd074429adf" exitCode=0 Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.663677 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" event={"ID":"e6b7a2ed-0d0a-4585-8684-fd407666efa9","Type":"ContainerDied","Data":"af29f1d58f380e28aef2d0c7b19edf5065e115b061ba4684eddeabd074429adf"} Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.667233 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"3a623efb0af1374fdca67d81a38540e91f10069799a55693291db386ef4a0f35"} Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.667486 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:29:35 crc kubenswrapper[4680]: I0126 16:29:35.700088 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7051668639999997 podStartE2EDuration="5.700045288s" podCreationTimestamp="2026-01-26 16:29:30 +0000 UTC" firstStartedPulling="2026-01-26 16:29:31.520965882 +0000 UTC m=+1446.682238151" lastFinishedPulling="2026-01-26 16:29:34.515844306 +0000 UTC m=+1449.677116575" observedRunningTime="2026-01-26 16:29:35.689307034 +0000 UTC m=+1450.850579303" watchObservedRunningTime="2026-01-26 16:29:35.700045288 +0000 UTC m=+1450.861317557" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.164147 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.241659 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.242049 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdxrg\" (UniqueName: \"kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.242109 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.242146 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.242263 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.242323 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc\") pod \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\" (UID: \"e6b7a2ed-0d0a-4585-8684-fd407666efa9\") " Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.272416 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg" (OuterVolumeSpecName: "kube-api-access-xdxrg") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "kube-api-access-xdxrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.314056 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config" (OuterVolumeSpecName: "config") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.325650 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.343794 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.343828 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.343839 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdxrg\" (UniqueName: \"kubernetes.io/projected/e6b7a2ed-0d0a-4585-8684-fd407666efa9-kube-api-access-xdxrg\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.346948 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.353298 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.374710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6b7a2ed-0d0a-4585-8684-fd407666efa9" (UID: "e6b7a2ed-0d0a-4585-8684-fd407666efa9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.445751 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.445789 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.445801 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b7a2ed-0d0a-4585-8684-fd407666efa9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.677810 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" event={"ID":"e6b7a2ed-0d0a-4585-8684-fd407666efa9","Type":"ContainerDied","Data":"6ba9b58f730a3b236ec3290bffa2fc6cc2c9c8e60200f6db39ac3adea2430669"} Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.677898 4680 scope.go:117] "RemoveContainer" containerID="af29f1d58f380e28aef2d0c7b19edf5065e115b061ba4684eddeabd074429adf" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.677831 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5475b7678f-tjq4z" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.702835 4680 scope.go:117] "RemoveContainer" containerID="c251b671eb4b7cfba182b9435b8ebde52f9c2c60863138b5a699809b6bd45c46" Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.721338 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:29:36 crc kubenswrapper[4680]: I0126 16:29:36.730084 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5475b7678f-tjq4z"] Jan 26 16:29:37 crc kubenswrapper[4680]: I0126 16:29:37.180097 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" path="/var/lib/kubelet/pods/e6b7a2ed-0d0a-4585-8684-fd407666efa9/volumes" Jan 26 16:29:40 crc kubenswrapper[4680]: I0126 16:29:40.717911 4680 generic.go:334] "Generic (PLEG): container finished" podID="8236f9fb-94da-4bd0-8f13-e2ca69b30db5" containerID="ae551eabb13924844bd11aea1b2a269b000f951eea87b1cc39f9e2300fb7d547" exitCode=0 Jan 26 16:29:40 crc kubenswrapper[4680]: I0126 16:29:40.718123 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ckt42" event={"ID":"8236f9fb-94da-4bd0-8f13-e2ca69b30db5","Type":"ContainerDied","Data":"ae551eabb13924844bd11aea1b2a269b000f951eea87b1cc39f9e2300fb7d547"} Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.021859 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.022209 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.155370 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.167414 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.196772 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ckt42"
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.372359 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data\") pod \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") "
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.373360 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle\") pod \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") "
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.373789 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2ll5\" (UniqueName: \"kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5\") pod \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") "
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.374194 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts\") pod \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\" (UID: \"8236f9fb-94da-4bd0-8f13-e2ca69b30db5\") "
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.378420 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5" (OuterVolumeSpecName: "kube-api-access-w2ll5") pod "8236f9fb-94da-4bd0-8f13-e2ca69b30db5" (UID: "8236f9fb-94da-4bd0-8f13-e2ca69b30db5"). InnerVolumeSpecName "kube-api-access-w2ll5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.391231 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts" (OuterVolumeSpecName: "scripts") pod "8236f9fb-94da-4bd0-8f13-e2ca69b30db5" (UID: "8236f9fb-94da-4bd0-8f13-e2ca69b30db5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.403244 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data" (OuterVolumeSpecName: "config-data") pod "8236f9fb-94da-4bd0-8f13-e2ca69b30db5" (UID: "8236f9fb-94da-4bd0-8f13-e2ca69b30db5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.404252 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8236f9fb-94da-4bd0-8f13-e2ca69b30db5" (UID: "8236f9fb-94da-4bd0-8f13-e2ca69b30db5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.476114 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2ll5\" (UniqueName: \"kubernetes.io/projected/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-kube-api-access-w2ll5\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.476147 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.476157 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.476168 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8236f9fb-94da-4bd0-8f13-e2ca69b30db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.751353 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ckt42" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.751670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ckt42" event={"ID":"8236f9fb-94da-4bd0-8f13-e2ca69b30db5","Type":"ContainerDied","Data":"619766144edb4dc1161266ac6feb256c514d2939f878b10af7cc074714447115"} Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.751950 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619766144edb4dc1161266ac6feb256c514d2939f878b10af7cc074714447115" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.785453 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.924133 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.924398 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-log" containerID="cri-o://866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a" gracePeriod=30 Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.924445 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-api" containerID="cri-o://7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3" gracePeriod=30 Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.938022 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": EOF" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.938902 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": EOF" Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.967216 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.967884 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" containerName="nova-scheduler-scheduler" containerID="cri-o://5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36" gracePeriod=30 Jan 26 16:29:42 crc kubenswrapper[4680]: I0126 16:29:42.988726 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:43 crc kubenswrapper[4680]: I0126 16:29:43.764889 4680 generic.go:334] "Generic (PLEG): container finished" podID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerID="866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a" exitCode=143 Jan 26 16:29:43 crc kubenswrapper[4680]: I0126 16:29:43.764964 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerDied","Data":"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a"} Jan 26 16:29:44 crc kubenswrapper[4680]: I0126 16:29:44.800959 4680 generic.go:334] "Generic (PLEG): container finished" podID="1d74302a-11cc-424f-b8d0-223ccb204523" containerID="5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36" exitCode=0 Jan 26 16:29:44 crc kubenswrapper[4680]: I0126 16:29:44.801404 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" containerID="cri-o://19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd" gracePeriod=30 Jan 26 16:29:44 crc kubenswrapper[4680]: I0126 16:29:44.801043 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d74302a-11cc-424f-b8d0-223ccb204523","Type":"ContainerDied","Data":"5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36"} Jan 26 16:29:44 crc kubenswrapper[4680]: I0126 16:29:44.803062 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" containerID="cri-o://6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b" gracePeriod=30 Jan 26 16:29:44 crc kubenswrapper[4680]: E0126 16:29:44.835333 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36 is running failed: container process not found" containerID="5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:29:44 crc kubenswrapper[4680]: E0126 16:29:44.835811 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36 is running failed: container process not found" containerID="5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:29:44 crc kubenswrapper[4680]: E0126 16:29:44.836297 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
Jan 26 16:29:44 crc kubenswrapper[4680]: E0126 16:29:44.836337 4680 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" containerName="nova-scheduler-scheduler"
Jan 26 16:29:44 crc kubenswrapper[4680]: I0126 16:29:44.994884 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.131486 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data\") pod \"1d74302a-11cc-424f-b8d0-223ccb204523\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") "
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.131778 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle\") pod \"1d74302a-11cc-424f-b8d0-223ccb204523\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") "
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.131893 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8c4m\" (UniqueName: \"kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m\") pod \"1d74302a-11cc-424f-b8d0-223ccb204523\" (UID: \"1d74302a-11cc-424f-b8d0-223ccb204523\") "
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.137125 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m" (OuterVolumeSpecName: "kube-api-access-t8c4m") pod "1d74302a-11cc-424f-b8d0-223ccb204523" (UID: "1d74302a-11cc-424f-b8d0-223ccb204523"). InnerVolumeSpecName "kube-api-access-t8c4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.159938 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data" (OuterVolumeSpecName: "config-data") pod "1d74302a-11cc-424f-b8d0-223ccb204523" (UID: "1d74302a-11cc-424f-b8d0-223ccb204523"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.166402 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d74302a-11cc-424f-b8d0-223ccb204523" (UID: "1d74302a-11cc-424f-b8d0-223ccb204523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.234364 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8c4m\" (UniqueName: \"kubernetes.io/projected/1d74302a-11cc-424f-b8d0-223ccb204523-kube-api-access-t8c4m\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.234709 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.234720 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74302a-11cc-424f-b8d0-223ccb204523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.811427 4680 generic.go:334] "Generic (PLEG): container finished" podID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerID="19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd" exitCode=143 Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.811509 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerDied","Data":"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd"} Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.814326 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d74302a-11cc-424f-b8d0-223ccb204523","Type":"ContainerDied","Data":"b198e72ca22dd34c4b2b87c3214ecb9a90ae0c9fb905ff25fd4f0392cd669685"} Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.814405 4680 scope.go:117] "RemoveContainer" containerID="5e9fc02791ad91fba5b9fb7d5e3489a41bc0c98583f5f4d123f4b1c7301b1d36" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.814503 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.840986 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.852204 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.862671 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:45 crc kubenswrapper[4680]: E0126 16:29:45.863178 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="dnsmasq-dns" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863208 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="dnsmasq-dns" Jan 26 16:29:45 crc kubenswrapper[4680]: E0126 16:29:45.863234 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="init" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863241 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="init" Jan 26 16:29:45 crc kubenswrapper[4680]: E0126 16:29:45.863250 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8236f9fb-94da-4bd0-8f13-e2ca69b30db5" containerName="nova-manage" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863256 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8236f9fb-94da-4bd0-8f13-e2ca69b30db5" containerName="nova-manage" Jan 26 16:29:45 crc kubenswrapper[4680]: E0126 16:29:45.863280 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" containerName="nova-scheduler-scheduler" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863299 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" containerName="nova-scheduler-scheduler" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863528 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" containerName="nova-scheduler-scheduler" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863548 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8236f9fb-94da-4bd0-8f13-e2ca69b30db5" containerName="nova-manage" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.863576 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b7a2ed-0d0a-4585-8684-fd407666efa9" containerName="dnsmasq-dns" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.864445 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.869789 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 16:29:45 crc kubenswrapper[4680]: I0126 16:29:45.876320 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.049942 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.049991 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.050083 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zp77\" (UniqueName: \"kubernetes.io/projected/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-kube-api-access-8zp77\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.151598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.151668 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.151756 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zp77\" (UniqueName: \"kubernetes.io/projected/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-kube-api-access-8zp77\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.157747 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.169900 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zp77\" (UniqueName: \"kubernetes.io/projected/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-kube-api-access-8zp77\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.170382 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc\") " pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.190332 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.657563 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.824987 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc","Type":"ContainerStarted","Data":"2ea874470ab9155860ad114f03f91fec4b065daecd9d5591ff187aea50fb5926"} Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.980491 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.980548 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.980592 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.981364 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:29:46 crc kubenswrapper[4680]: I0126 16:29:46.981418 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a" gracePeriod=600 Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.184682 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d74302a-11cc-424f-b8d0-223ccb204523" path="/var/lib/kubelet/pods/1d74302a-11cc-424f-b8d0-223ccb204523/volumes" Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.851945 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4a7c92f-b8d8-4c49-96a9-6d0e05752ffc","Type":"ContainerStarted","Data":"683fa5a8d29417cf8e3b462af818673858ca5cd6a09f817174be66c107e23c30"} Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.864631 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a" exitCode=0 Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.864682 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a"} Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.864717 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"} Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.864738 4680 scope.go:117] "RemoveContainer" containerID="30efb2e6cfd89156d3b5b947e16c8c7445b6d65d474e4ed3ab4ec65fec606211" Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.884494 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.884338095 podStartE2EDuration="2.884338095s" podCreationTimestamp="2026-01-26 16:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:47.872607124 +0000 UTC m=+1463.033879393" watchObservedRunningTime="2026-01-26 16:29:47.884338095 +0000 UTC m=+1463.045610364" Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.943181 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": read tcp 10.217.0.2:52048->10.217.0.211:8775: read: connection reset by peer" Jan 26 16:29:47 crc kubenswrapper[4680]: I0126 16:29:47.943601 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": read tcp 10.217.0.2:52062->10.217.0.211:8775: read: connection reset by peer" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.527272 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.707327 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle\") pod \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.707447 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrd4b\" (UniqueName: \"kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b\") pod \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.708499 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs\") pod \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.713106 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs" (OuterVolumeSpecName: "logs") pod "9dfa4211-539a-410c-b22a-f0aa33bd2ea2" (UID: "9dfa4211-539a-410c-b22a-f0aa33bd2ea2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.713207 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs\") pod \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.713463 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data\") pod \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\" (UID: \"9dfa4211-539a-410c-b22a-f0aa33bd2ea2\") " Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.714482 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.714926 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b" (OuterVolumeSpecName: "kube-api-access-wrd4b") pod "9dfa4211-539a-410c-b22a-f0aa33bd2ea2" (UID: "9dfa4211-539a-410c-b22a-f0aa33bd2ea2"). InnerVolumeSpecName "kube-api-access-wrd4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.796436 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dfa4211-539a-410c-b22a-f0aa33bd2ea2" (UID: "9dfa4211-539a-410c-b22a-f0aa33bd2ea2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.822596 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.822630 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrd4b\" (UniqueName: \"kubernetes.io/projected/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-kube-api-access-wrd4b\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.823231 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data" (OuterVolumeSpecName: "config-data") pod "9dfa4211-539a-410c-b22a-f0aa33bd2ea2" (UID: "9dfa4211-539a-410c-b22a-f0aa33bd2ea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.869523 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.896714 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9dfa4211-539a-410c-b22a-f0aa33bd2ea2" (UID: "9dfa4211-539a-410c-b22a-f0aa33bd2ea2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.926013 4680 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.926045 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfa4211-539a-410c-b22a-f0aa33bd2ea2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.931648 4680 generic.go:334] "Generic (PLEG): container finished" podID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerID="6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b" exitCode=0 Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.931717 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerDied","Data":"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b"} Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.931749 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9dfa4211-539a-410c-b22a-f0aa33bd2ea2","Type":"ContainerDied","Data":"e6751d143c7d2e240d36a6fb5f4ddaef68bd2cb24500b47d0559d48a57ae4593"} Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.931762 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.931766 4680 scope.go:117] "RemoveContainer" containerID="6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.947333 4680 generic.go:334] "Generic (PLEG): container finished" podID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerID="7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3" exitCode=0 Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.947837 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.947910 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerDied","Data":"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3"} Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.947960 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e4c9878-b19c-4b05-8962-d42e4f0c35f7","Type":"ContainerDied","Data":"bfef7c07b2df5439252b64e0a46c6afd704bde29e17ec8f19483520e440529f3"} Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.977486 4680 scope.go:117] "RemoveContainer" containerID="19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd" Jan 26 16:29:48 crc kubenswrapper[4680]: I0126 16:29:48.985227 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.006277 4680 scope.go:117] "RemoveContainer" containerID="6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.014142 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.016513 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b\": container with ID starting with 6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b not found: ID does not exist" containerID="6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.016549 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b"} err="failed to get container status \"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b\": rpc error: code = NotFound desc = could not find container \"6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b\": container with ID starting with 6f411afa997836ca3b450a3a4e1a99047c030d952c73215344806ba9e055f44b not found: ID does not exist" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.016575 4680 scope.go:117] "RemoveContainer" containerID="19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.021059 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd\": container with ID starting with 19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd not found: ID does not exist" 
containerID="19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.021149 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd"} err="failed to get container status \"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd\": rpc error: code = NotFound desc = could not find container \"19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd\": container with ID starting with 19455bd7d9780ba8bfd12b4be22c0ed7d9282b8ace5227d7e08704ef4aeb71fd not found: ID does not exist" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.021175 4680 scope.go:117] "RemoveContainer" containerID="7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027558 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027781 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027821 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shgfm\" (UniqueName: \"kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027925 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.027974 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.028006 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs\") pod \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\" (UID: \"0e4c9878-b19c-4b05-8962-d42e4f0c35f7\") " Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.028956 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs" (OuterVolumeSpecName: "logs") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.038686 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm" (OuterVolumeSpecName: "kube-api-access-shgfm") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "kube-api-access-shgfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.044185 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044216 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.044236 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044242 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.044254 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-log" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044260 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-log" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.044277 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-api" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044282 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-api" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044578 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-metadata" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044593 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" containerName="nova-metadata-log" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044606 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-api" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.044614 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" containerName="nova-api-log" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.045569 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.045644 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.050448 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.050710 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.078384 4680 scope.go:117] "RemoveContainer" containerID="866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.116355 4680 scope.go:117] "RemoveContainer" containerID="7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.117009 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3\": container with ID starting with 7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3 not found: ID does not exist" containerID="7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.117039 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3"} err="failed to get container status \"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3\": rpc error: code = NotFound desc = could not find container \"7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3\": container with ID starting with 7fe2cd4eaeda3cc6d5e0842fbab3434e42c63cf08c76b3654c0ad9acea026ec3 not found: ID does not exist" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.117085 4680 scope.go:117] "RemoveContainer" containerID="866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a" Jan 26 16:29:49 crc kubenswrapper[4680]: E0126 16:29:49.117367 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a\": container with ID starting with 866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a not found: ID does not exist" containerID="866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.117400 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a"} err="failed to get container status \"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a\": rpc error: code = NotFound desc = could not find container \"866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a\": container with ID starting with 866965665f7d16260c1f33551c8b1b6b35f14c46989eb6cbf331d13c44e05c5a not found: ID does not exist" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.123628 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.131410 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwswx\" (UniqueName: \"kubernetes.io/projected/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-kube-api-access-jwswx\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.131735 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-config-data\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.131869 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.132007 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.132248 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-logs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.132419 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.132535 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.132618 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shgfm\" (UniqueName: \"kubernetes.io/projected/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-kube-api-access-shgfm\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.140029 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.145265 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data" (OuterVolumeSpecName: "config-data") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.150344 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e4c9878-b19c-4b05-8962-d42e4f0c35f7" (UID: "0e4c9878-b19c-4b05-8962-d42e4f0c35f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.185241 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfa4211-539a-410c-b22a-f0aa33bd2ea2" path="/var/lib/kubelet/pods/9dfa4211-539a-410c-b22a-f0aa33bd2ea2/volumes" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235011 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwswx\" (UniqueName: \"kubernetes.io/projected/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-kube-api-access-jwswx\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235132 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-config-data\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235179 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235213 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235289 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-logs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235374 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235391 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc 
kubenswrapper[4680]: I0126 16:29:49.235404 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e4c9878-b19c-4b05-8962-d42e4f0c35f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.235858 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-logs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.241687 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-config-data\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.242951 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.250679 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.262083 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwswx\" (UniqueName: \"kubernetes.io/projected/91d568ce-4c36-4722-bd9f-f3ad544a0e8d-kube-api-access-jwswx\") pod \"nova-metadata-0\" (UID: \"91d568ce-4c36-4722-bd9f-f3ad544a0e8d\") " pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.290138 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.306564 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.333800 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.342303 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.352033 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.352243 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.352525 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.364136 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.392441 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.481939 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl728\" (UniqueName: \"kubernetes.io/projected/79d34a97-2f10-43bb-a32a-d09647e7986d-kube-api-access-tl728\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.482004 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.482035 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.482090 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d34a97-2f10-43bb-a32a-d09647e7986d-logs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.482182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-config-data\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.482281 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-public-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl728\" (UniqueName: \"kubernetes.io/projected/79d34a97-2f10-43bb-a32a-d09647e7986d-kube-api-access-tl728\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583648 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583701 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/79d34a97-2f10-43bb-a32a-d09647e7986d-logs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583775 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-config-data\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.583807 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-public-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.584413 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d34a97-2f10-43bb-a32a-d09647e7986d-logs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.591833 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.593521 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-config-data\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.600554 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.602564 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d34a97-2f10-43bb-a32a-d09647e7986d-public-tls-certs\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.606667 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl728\" (UniqueName: \"kubernetes.io/projected/79d34a97-2f10-43bb-a32a-d09647e7986d-kube-api-access-tl728\") pod \"nova-api-0\" (UID: \"79d34a97-2f10-43bb-a32a-d09647e7986d\") " pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.734103 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:29:49 crc kubenswrapper[4680]: I0126 16:29:49.965494 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:29:49 crc kubenswrapper[4680]: W0126 16:29:49.972368 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d568ce_4c36_4722_bd9f_f3ad544a0e8d.slice/crio-3c3e34767b360bf578a7cc5fc375f8351b8af49f82b669c73ce9d98504888a2e WatchSource:0}: Error finding container 3c3e34767b360bf578a7cc5fc375f8351b8af49f82b669c73ce9d98504888a2e: Status 404 returned error can't find the container with id 3c3e34767b360bf578a7cc5fc375f8351b8af49f82b669c73ce9d98504888a2e Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.232679 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:29:50 crc kubenswrapper[4680]: W0126 16:29:50.257447 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d34a97_2f10_43bb_a32a_d09647e7986d.slice/crio-6c5700ba79d7eff6ad4d869b5a91ea8c147e233400203d18fe2cd8c48d86c895 WatchSource:0}: Error finding container 6c5700ba79d7eff6ad4d869b5a91ea8c147e233400203d18fe2cd8c48d86c895: Status 404 returned error can't find the container with id 6c5700ba79d7eff6ad4d869b5a91ea8c147e233400203d18fe2cd8c48d86c895 Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.984107 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91d568ce-4c36-4722-bd9f-f3ad544a0e8d","Type":"ContainerStarted","Data":"2b5737cd872c3ea5064c39decd359317c3bd0af63f8e84afedfd4beb76659414"} Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.984149 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91d568ce-4c36-4722-bd9f-f3ad544a0e8d","Type":"ContainerStarted","Data":"7468732fb1e8ddd7d870cc76b8c2c2e8226acc7da8bf249d3eb53d3bc9d52fc2"} Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.984162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91d568ce-4c36-4722-bd9f-f3ad544a0e8d","Type":"ContainerStarted","Data":"3c3e34767b360bf578a7cc5fc375f8351b8af49f82b669c73ce9d98504888a2e"} Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.990630 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d34a97-2f10-43bb-a32a-d09647e7986d","Type":"ContainerStarted","Data":"56fe103f049b8b4564cd132edc5d5b6b681c8d2da7b8537f41622e7586db8034"} Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.990676 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d34a97-2f10-43bb-a32a-d09647e7986d","Type":"ContainerStarted","Data":"3f20b355f4c06afe67fdb68daa19146d8d4432dcb73510f48602683ca3c4b943"} Jan 26 16:29:50 crc kubenswrapper[4680]: I0126 16:29:50.990689 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d34a97-2f10-43bb-a32a-d09647e7986d","Type":"ContainerStarted","Data":"6c5700ba79d7eff6ad4d869b5a91ea8c147e233400203d18fe2cd8c48d86c895"} Jan 26 16:29:51 crc kubenswrapper[4680]: I0126 16:29:51.013134 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.013114771 podStartE2EDuration="3.013114771s" podCreationTimestamp="2026-01-26 16:29:48 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:51.001671168 +0000 UTC m=+1466.162943447" watchObservedRunningTime="2026-01-26 16:29:51.013114771 +0000 UTC m=+1466.174387040" Jan 26 16:29:51 crc kubenswrapper[4680]: I0126 16:29:51.023228 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.023203966 podStartE2EDuration="2.023203966s" podCreationTimestamp="2026-01-26 16:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:29:51.020286753 +0000 UTC m=+1466.181559032" watchObservedRunningTime="2026-01-26 16:29:51.023203966 +0000 UTC m=+1466.184476235" Jan 26 16:29:51 crc kubenswrapper[4680]: I0126 16:29:51.184908 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4c9878-b19c-4b05-8962-d42e4f0c35f7" path="/var/lib/kubelet/pods/0e4c9878-b19c-4b05-8962-d42e4f0c35f7/volumes" Jan 26 16:29:51 crc kubenswrapper[4680]: I0126 16:29:51.201740 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 16:29:54 crc kubenswrapper[4680]: I0126 16:29:54.393136 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:29:54 crc kubenswrapper[4680]: I0126 16:29:54.393697 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:29:56 crc kubenswrapper[4680]: I0126 16:29:56.190863 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 16:29:56 crc kubenswrapper[4680]: I0126 16:29:56.215224 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 16:29:57 crc kubenswrapper[4680]: I0126 16:29:57.073791 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 16:29:59 crc kubenswrapper[4680]: I0126 16:29:59.393647 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:29:59 crc kubenswrapper[4680]: I0126 16:29:59.393708 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:29:59 crc kubenswrapper[4680]: I0126 16:29:59.734496 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:29:59 crc kubenswrapper[4680]: I0126 16:29:59.734543 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.170117 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw"] Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.171607 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.176520 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.185325 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw"] Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.185675 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.324454 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.324637 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.324720 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwp89\" (UniqueName: \"kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.410344 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91d568ce-4c36-4722-bd9f-f3ad544a0e8d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.410475 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91d568ce-4c36-4722-bd9f-f3ad544a0e8d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.429789 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwp89\" (UniqueName: \"kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.429895 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.430146 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.430995 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.451154 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.453745 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwp89\" (UniqueName: \"kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89\") pod \"collect-profiles-29490750-29chw\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.495208 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.752482 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d34a97-2f10-43bb-a32a-d09647e7986d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.752724 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d34a97-2f10-43bb-a32a-d09647e7986d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:30:00 crc kubenswrapper[4680]: I0126 16:30:00.948252 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 16:30:01 crc kubenswrapper[4680]: I0126 16:30:01.037650 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw"] Jan 26 16:30:01 crc kubenswrapper[4680]: I0126 16:30:01.094361 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" event={"ID":"f629c8d9-c945-477d-ad2b-6c397e93b74c","Type":"ContainerStarted","Data":"25cd5e1ae2d52317e6cdd8d5ece6ba260b04349d4c2ccc6f9b560f8251a68e45"} Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.086571 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.088913 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.110878 4680 generic.go:334] "Generic (PLEG): container finished" podID="f629c8d9-c945-477d-ad2b-6c397e93b74c" containerID="7350ab58426ad93630b2e29e1128d5354aa7778fb09749bba8e30faa99efdbe8" exitCode=0 Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.110927 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" event={"ID":"f629c8d9-c945-477d-ad2b-6c397e93b74c","Type":"ContainerDied","Data":"7350ab58426ad93630b2e29e1128d5354aa7778fb09749bba8e30faa99efdbe8"} Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.123368 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.163921 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.163976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4h52\" (UniqueName: \"kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.164052 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.266659 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.266709 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.266754 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4h52\" (UniqueName: \"kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.266902 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities\") pod 
\"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.268161 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.289830 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4h52\" (UniqueName: \"kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52\") pod \"community-operators-pksg4\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:02 crc kubenswrapper[4680]: I0126 16:30:02.427545 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:03 crc kubenswrapper[4680]: I0126 16:30:03.834910 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:03 crc kubenswrapper[4680]: W0126 16:30:03.838054 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod626dab37_c890_4cf0_a9c8_0ad0cc15cddd.slice/crio-902f60f44ebd3e0c624b5c5509c304bd0a67b7b5befacff7637bd2561cd858c3 WatchSource:0}: Error finding container 902f60f44ebd3e0c624b5c5509c304bd0a67b7b5befacff7637bd2561cd858c3: Status 404 returned error can't find the container with id 902f60f44ebd3e0c624b5c5509c304bd0a67b7b5befacff7637bd2561cd858c3 Jan 26 16:30:03 crc kubenswrapper[4680]: I0126 16:30:03.883503 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.015116 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume\") pod \"f629c8d9-c945-477d-ad2b-6c397e93b74c\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.015188 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwp89\" (UniqueName: \"kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89\") pod \"f629c8d9-c945-477d-ad2b-6c397e93b74c\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.015893 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume" (OuterVolumeSpecName: "config-volume") pod "f629c8d9-c945-477d-ad2b-6c397e93b74c" (UID: "f629c8d9-c945-477d-ad2b-6c397e93b74c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.016438 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume\") pod \"f629c8d9-c945-477d-ad2b-6c397e93b74c\" (UID: \"f629c8d9-c945-477d-ad2b-6c397e93b74c\") " Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.017188 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f629c8d9-c945-477d-ad2b-6c397e93b74c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.021222 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89" (OuterVolumeSpecName: "kube-api-access-wwp89") pod "f629c8d9-c945-477d-ad2b-6c397e93b74c" (UID: "f629c8d9-c945-477d-ad2b-6c397e93b74c"). InnerVolumeSpecName "kube-api-access-wwp89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.022134 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f629c8d9-c945-477d-ad2b-6c397e93b74c" (UID: "f629c8d9-c945-477d-ad2b-6c397e93b74c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.119290 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f629c8d9-c945-477d-ad2b-6c397e93b74c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.119328 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwp89\" (UniqueName: \"kubernetes.io/projected/f629c8d9-c945-477d-ad2b-6c397e93b74c-kube-api-access-wwp89\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.132467 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.134176 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw" event={"ID":"f629c8d9-c945-477d-ad2b-6c397e93b74c","Type":"ContainerDied","Data":"25cd5e1ae2d52317e6cdd8d5ece6ba260b04349d4c2ccc6f9b560f8251a68e45"} Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.134227 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25cd5e1ae2d52317e6cdd8d5ece6ba260b04349d4c2ccc6f9b560f8251a68e45" Jan 26 16:30:04 crc kubenswrapper[4680]: I0126 16:30:04.135051 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerStarted","Data":"902f60f44ebd3e0c624b5c5509c304bd0a67b7b5befacff7637bd2561cd858c3"} Jan 26 16:30:05 crc kubenswrapper[4680]: I0126 16:30:05.147337 4680 generic.go:334] "Generic (PLEG): container finished" podID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerID="8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb" exitCode=0 Jan 26 16:30:05 crc kubenswrapper[4680]: I0126 16:30:05.147420 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerDied","Data":"8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb"} Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.183233 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerStarted","Data":"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b"} Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.400137 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.413880 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.416574 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.431050 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:09 crc kubenswrapper[4680]: E0126 16:30:09.431517 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f629c8d9-c945-477d-ad2b-6c397e93b74c" containerName="collect-profiles" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.431533 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f629c8d9-c945-477d-ad2b-6c397e93b74c" containerName="collect-profiles" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.431724 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f629c8d9-c945-477d-ad2b-6c397e93b74c" containerName="collect-profiles" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.433092 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.496689 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.545060 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.545134 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.545164 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.647283 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.647596 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.647690 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.648240 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.648980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:09 crc kubenswrapper[4680]: I0126 16:30:09.666278 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6\") pod \"redhat-marketplace-5wrt8\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:09.745354 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:09.746005 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:09.747024 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:09.752464 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:09.753733 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:10.207820 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:10.214644 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:10.222267 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 16:30:12 crc kubenswrapper[4680]: I0126 16:30:11.467418 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:30:16 crc kubenswrapper[4680]: I0126 16:30:12.823993 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 16:30:16 crc kubenswrapper[4680]: I0126 16:30:12.824043 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 16:30:16 crc kubenswrapper[4680]: I0126 16:30:16.060367 4680 generic.go:334] "Generic (PLEG): container finished" podID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerID="48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b" exitCode=0 Jan 26 16:30:16 crc kubenswrapper[4680]: I0126 16:30:16.060425 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerDied","Data":"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b"} Jan 26 16:30:16 crc kubenswrapper[4680]: I0126 16:30:16.748443 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:17 crc kubenswrapper[4680]: I0126 16:30:17.069921 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" 
event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerStarted","Data":"9b64cd208aa24a9bda4b4da5e9b540bef2a99089e14cd03b6ed993ca5f580436"} Jan 26 16:30:18 crc kubenswrapper[4680]: I0126 16:30:18.079159 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerID="60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12" exitCode=0 Jan 26 16:30:18 crc kubenswrapper[4680]: I0126 16:30:18.079196 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerDied","Data":"60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12"} Jan 26 16:30:19 crc kubenswrapper[4680]: I0126 16:30:19.088854 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerStarted","Data":"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3"} Jan 26 16:30:19 crc kubenswrapper[4680]: I0126 16:30:19.116480 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pksg4" podStartSLOduration=4.006591321 podStartE2EDuration="17.116462122s" podCreationTimestamp="2026-01-26 16:30:02 +0000 UTC" firstStartedPulling="2026-01-26 16:30:05.150652007 +0000 UTC m=+1480.311924276" lastFinishedPulling="2026-01-26 16:30:18.260522808 +0000 UTC m=+1493.421795077" observedRunningTime="2026-01-26 16:30:19.10934425 +0000 UTC m=+1494.270616539" watchObservedRunningTime="2026-01-26 16:30:19.116462122 +0000 UTC m=+1494.277734381" Jan 26 16:30:22 crc kubenswrapper[4680]: I0126 16:30:22.428196 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:22 crc kubenswrapper[4680]: I0126 16:30:22.428834 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:22 crc kubenswrapper[4680]: I0126 16:30:22.506532 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:23 crc kubenswrapper[4680]: I0126 16:30:23.123214 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerStarted","Data":"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c"} Jan 26 16:30:23 crc kubenswrapper[4680]: I0126 16:30:23.480894 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pksg4" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="registry-server" probeResult="failure" output=< Jan 26 16:30:23 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:30:23 crc kubenswrapper[4680]: > Jan 26 16:30:23 crc kubenswrapper[4680]: I0126 16:30:23.713167 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:26 crc kubenswrapper[4680]: I0126 16:30:26.153078 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerDied","Data":"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c"} Jan 26 16:30:26 crc kubenswrapper[4680]: I0126 16:30:26.153029 4680 generic.go:334] "Generic (PLEG): container 
finished" podID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerID="cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c" exitCode=0 Jan 26 16:30:29 crc kubenswrapper[4680]: I0126 16:30:29.183583 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerStarted","Data":"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de"} Jan 26 16:30:29 crc kubenswrapper[4680]: I0126 16:30:29.224372 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wrt8" podStartSLOduration=10.149038886 podStartE2EDuration="20.224356032s" podCreationTimestamp="2026-01-26 16:30:09 +0000 UTC" firstStartedPulling="2026-01-26 16:30:18.094468011 +0000 UTC m=+1493.255740280" lastFinishedPulling="2026-01-26 16:30:28.169785157 +0000 UTC m=+1503.331057426" observedRunningTime="2026-01-26 16:30:29.223363534 +0000 UTC m=+1504.384635803" watchObservedRunningTime="2026-01-26 16:30:29.224356032 +0000 UTC m=+1504.385628291" Jan 26 16:30:29 crc kubenswrapper[4680]: I0126 16:30:29.754357 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:29 crc kubenswrapper[4680]: I0126 16:30:29.754399 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:30 crc kubenswrapper[4680]: I0126 16:30:30.746798 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="rabbitmq" containerID="cri-o://ef56e993d17a4c431f76847a8a65b409e91b9c019e0979c88bc3be1045841c34" gracePeriod=604793 Jan 26 16:30:30 crc kubenswrapper[4680]: I0126 16:30:30.796998 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5wrt8" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="registry-server" probeResult="failure" output=< Jan 26 16:30:30 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:30:30 crc kubenswrapper[4680]: > Jan 26 16:30:31 crc kubenswrapper[4680]: I0126 16:30:31.728666 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="rabbitmq" containerID="cri-o://a224c0526619df9fc42d61d452f7c54f4d1fb2f05991d9519790e835d4c18784" gracePeriod=604791 Jan 26 16:30:32 crc kubenswrapper[4680]: I0126 16:30:32.473485 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:32 crc kubenswrapper[4680]: I0126 16:30:32.522309 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:33 crc kubenswrapper[4680]: I0126 16:30:33.293668 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:34 crc kubenswrapper[4680]: E0126 16:30:34.023336 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find 
data in memory cache]" Jan 26 16:30:34 crc kubenswrapper[4680]: I0126 16:30:34.221128 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pksg4" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="registry-server" containerID="cri-o://b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3" gracePeriod=2 Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.061988 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.149395 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content\") pod \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.149527 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities\") pod \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.149612 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4h52\" (UniqueName: \"kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52\") pod \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\" (UID: \"626dab37-c890-4cf0-a9c8-0ad0cc15cddd\") " Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.150462 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities" (OuterVolumeSpecName: "utilities") pod "626dab37-c890-4cf0-a9c8-0ad0cc15cddd" (UID: "626dab37-c890-4cf0-a9c8-0ad0cc15cddd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.156951 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52" (OuterVolumeSpecName: "kube-api-access-l4h52") pod "626dab37-c890-4cf0-a9c8-0ad0cc15cddd" (UID: "626dab37-c890-4cf0-a9c8-0ad0cc15cddd"). InnerVolumeSpecName "kube-api-access-l4h52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.229355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "626dab37-c890-4cf0-a9c8-0ad0cc15cddd" (UID: "626dab37-c890-4cf0-a9c8-0ad0cc15cddd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.232377 4680 generic.go:334] "Generic (PLEG): container finished" podID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerID="b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3" exitCode=0 Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.232415 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerDied","Data":"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3"} Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.232442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pksg4" event={"ID":"626dab37-c890-4cf0-a9c8-0ad0cc15cddd","Type":"ContainerDied","Data":"902f60f44ebd3e0c624b5c5509c304bd0a67b7b5befacff7637bd2561cd858c3"} Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.232459 4680 scope.go:117] "RemoveContainer" containerID="b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.232702 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pksg4" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.251616 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4h52\" (UniqueName: \"kubernetes.io/projected/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-kube-api-access-l4h52\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.251641 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.251653 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/626dab37-c890-4cf0-a9c8-0ad0cc15cddd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.278812 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.288687 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pksg4"] Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.315310 4680 scope.go:117] "RemoveContainer" containerID="48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.337339 4680 scope.go:117] "RemoveContainer" containerID="8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.391900 4680 scope.go:117] "RemoveContainer" containerID="b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3" Jan 26 16:30:35 crc kubenswrapper[4680]: E0126 16:30:35.393788 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3\": container with ID starting with b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3 not found: ID does not exist" containerID="b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.393835 
4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3"} err="failed to get container status \"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3\": rpc error: code = NotFound desc = could not find container \"b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3\": container with ID starting with b2b7bbe5a0fdfbe14aabc0de1c935b035a32afb7593cc441385c73fe7a4917d3 not found: ID does not exist" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.393865 4680 scope.go:117] "RemoveContainer" containerID="48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b" Jan 26 16:30:35 crc kubenswrapper[4680]: E0126 16:30:35.394392 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b\": container with ID starting with 48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b not found: ID does not exist" containerID="48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.394418 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b"} err="failed to get container status \"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b\": rpc error: code = NotFound desc = could not find container \"48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b\": container with ID starting with 48456d16fcd1448a4cc66994db402ddea037c82bc7fab17389ff8867e5c7c13b not found: ID does not exist" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.394435 4680 scope.go:117] "RemoveContainer" containerID="8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb" Jan 26 16:30:35 crc kubenswrapper[4680]: E0126 16:30:35.395112 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb\": container with ID starting with 8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb not found: ID does not exist" containerID="8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb" Jan 26 16:30:35 crc kubenswrapper[4680]: I0126 16:30:35.395141 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb"} err="failed to get container status \"8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb\": rpc error: code = NotFound desc = could not find container \"8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb\": container with ID starting with 8c2407746b7ef85da3163802f6efb564fbb1eba1994e49783e6a0b5ac64a92cb not found: ID does not exist" Jan 26 16:30:37 crc kubenswrapper[4680]: I0126 16:30:37.181907 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" path="/var/lib/kubelet/pods/626dab37-c890-4cf0-a9c8-0ad0cc15cddd/volumes" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.692536 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:30:38 crc kubenswrapper[4680]: E0126 16:30:38.692970 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="extract-content" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.692983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="extract-content" Jan 26 16:30:38 crc kubenswrapper[4680]: E0126 16:30:38.693015 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="registry-server" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.693022 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="registry-server" Jan 26 16:30:38 crc kubenswrapper[4680]: E0126 16:30:38.693033 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="extract-utilities" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.693041 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="extract-utilities" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.693248 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="626dab37-c890-4cf0-a9c8-0ad0cc15cddd" containerName="registry-server" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.694277 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.696526 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720020 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720106 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720152 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720224 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9b9\" (UniqueName: \"kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720278 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc\") pod 
\"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720338 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.720357 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.726622 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.822855 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9b9\" (UniqueName: \"kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823358 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823436 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823492 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.823547 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.824402 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.824423 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.824475 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.824969 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.825008 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.825099 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:38 crc kubenswrapper[4680]: I0126 16:30:38.843318 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9b9\" (UniqueName: \"kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9\") pod \"dnsmasq-dns-5cf9d897bc-b5qgp\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:39 crc kubenswrapper[4680]: I0126 16:30:39.012181 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:39 crc kubenswrapper[4680]: I0126 16:30:39.872832 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:39 crc kubenswrapper[4680]: I0126 16:30:39.927298 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:40 crc kubenswrapper[4680]: I0126 16:30:40.095424 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:30:40 crc kubenswrapper[4680]: I0126 16:30:40.172371 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.342571 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad417dd7-c38c-4934-a895-d0253bb03494" containerID="ef56e993d17a4c431f76847a8a65b409e91b9c019e0979c88bc3be1045841c34" exitCode=0 Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.342670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerDied","Data":"ef56e993d17a4c431f76847a8a65b409e91b9c019e0979c88bc3be1045841c34"} Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.346031 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" event={"ID":"f7ddd112-a6cf-481a-97ff-0f50ed652b23","Type":"ContainerStarted","Data":"2c8a46267a94ec25143ea05cc14b2cac6cfcb4080d1372934a5870b6cb5c1c6d"} Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.347720 4680 generic.go:334] "Generic (PLEG): container finished" podID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerID="a224c0526619df9fc42d61d452f7c54f4d1fb2f05991d9519790e835d4c18784" exitCode=0 Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.348422 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerDied","Data":"a224c0526619df9fc42d61d452f7c54f4d1fb2f05991d9519790e835d4c18784"} Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.601442 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.664797 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.664887 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.664939 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.664962 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665000 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665031 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665055 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665156 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665197 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665257 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4gv\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: 
\"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.665403 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data\") pod \"3b7b1e0b-5218-426e-aca1-76d49633811c\" (UID: \"3b7b1e0b-5218-426e-aca1-76d49633811c\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.667121 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.667590 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.672542 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.676233 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv" (OuterVolumeSpecName: "kube-api-access-xw4gv") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "kube-api-access-xw4gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.678583 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.678996 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.679654 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info" (OuterVolumeSpecName: "pod-info") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.682432 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.687642 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771355 4680 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b7b1e0b-5218-426e-aca1-76d49633811c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771688 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771701 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771712 4680 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771723 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771735 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771752 4680 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b7b1e0b-5218-426e-aca1-76d49633811c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.771764 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4gv\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-kube-api-access-xw4gv\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.780158 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf" (OuterVolumeSpecName: "server-conf") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.787908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data" (OuterVolumeSpecName: "config-data") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.857077 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.866481 4680 scope.go:117] "RemoveContainer" containerID="a43e37fe910400592dcea87fbd4a4d925c823cfae992be4f3e8c3f5c81d03ee3" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.889953 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.890009 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:40.890021 4680 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b7b1e0b-5218-426e-aca1-76d49633811c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.041268 4680 scope.go:117] "RemoveContainer" containerID="a224c0526619df9fc42d61d452f7c54f4d1fb2f05991d9519790e835d4c18784" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.057186 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3b7b1e0b-5218-426e-aca1-76d49633811c" (UID: "3b7b1e0b-5218-426e-aca1-76d49633811c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.104229 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b7b1e0b-5218-426e-aca1-76d49633811c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.186778 4680 scope.go:117] "RemoveContainer" containerID="7054a2cc380f039ebb9edb2c5103ef606ae60293f81d2038e21f08e9df3efbc5" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.230425 4680 scope.go:117] "RemoveContainer" containerID="ee90a647e7142abaf7f0d28086f23814b1656aceec5771a48734472625d253e3" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.372000 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b7b1e0b-5218-426e-aca1-76d49633811c","Type":"ContainerDied","Data":"5ab71a6f154c7e0d61ee98f5008d41271e73eac4010ef0fcc61317c2bdcefa6a"} Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.372111 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.377358 4680 generic.go:334] "Generic (PLEG): container finished" podID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerID="ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6" exitCode=0 Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.377535 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wrt8" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="registry-server" containerID="cri-o://0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de" gracePeriod=2 Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.377904 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" event={"ID":"f7ddd112-a6cf-481a-97ff-0f50ed652b23","Type":"ContainerDied","Data":"ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6"} Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.494706 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.600227 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.623884 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:41 crc kubenswrapper[4680]: E0126 16:30:41.633409 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="setup-container" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.633441 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="setup-container" Jan 26 16:30:41 crc kubenswrapper[4680]: E0126 16:30:41.633464 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="rabbitmq" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.633471 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="rabbitmq" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.633678 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" containerName="rabbitmq" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.635267 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.637402 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.639335 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.639502 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.639679 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p6gcx" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.639811 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.644175 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.644326 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.663382 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.762094 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838317 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838651 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838677 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838717 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838758 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838778 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838833 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838849 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838877 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5rn\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-kube-api-access-6m5rn\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838908 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.838932 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940399 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmd8d\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940443 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940478 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940530 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940545 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940640 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940668 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940694 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940781 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.940797 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf\") pod \"ad417dd7-c38c-4934-a895-d0253bb03494\" (UID: \"ad417dd7-c38c-4934-a895-d0253bb03494\") " Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941103 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941134 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941191 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941208 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941238 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5rn\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-kube-api-access-6m5rn\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941310 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941355 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941372 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.941407 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.942546 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.942809 4680 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.942831 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.953060 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d" (OuterVolumeSpecName: "kube-api-access-kmd8d") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "kube-api-access-kmd8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.953415 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.956403 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.956494 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.960629 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.965722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.971672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.972424 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.974514 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.977899 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.986092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.990801 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info" (OuterVolumeSpecName: "pod-info") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.991256 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:30:41 crc kubenswrapper[4680]: I0126 16:30:41.991643 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.012911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m5rn\" (UniqueName: \"kubernetes.io/projected/cc818c5c-f9dc-46e9-b1d0-bca79fa6a985-kube-api-access-6m5rn\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.017454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.018838 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data" (OuterVolumeSpecName: "config-data") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044326 4680 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044368 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmd8d\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-kube-api-access-kmd8d\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044379 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044390 4680 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad417dd7-c38c-4934-a895-d0253bb03494-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044403 4680 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad417dd7-c38c-4934-a895-d0253bb03494-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044438 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044451 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044461 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.044471 4680 reconciler_common.go:293] "Volume 
detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.060653 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf" (OuterVolumeSpecName: "server-conf") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.072753 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985\") " pod="openstack/rabbitmq-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.076249 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.086836 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.137735 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ad417dd7-c38c-4934-a895-d0253bb03494" (UID: "ad417dd7-c38c-4934-a895-d0253bb03494"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.155982 4680 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad417dd7-c38c-4934-a895-d0253bb03494-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.156018 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.156031 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad417dd7-c38c-4934-a895-d0253bb03494-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.257696 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities\") pod \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.257882 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content\") pod \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.258030 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22tv6\" (UniqueName: 
\"kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6\") pod \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\" (UID: \"d7f9af68-9a32-4fc8-9c2e-552788c2ff89\") " Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.258557 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities" (OuterVolumeSpecName: "utilities") pod "d7f9af68-9a32-4fc8-9c2e-552788c2ff89" (UID: "d7f9af68-9a32-4fc8-9c2e-552788c2ff89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.258989 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.280113 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6" (OuterVolumeSpecName: "kube-api-access-22tv6") pod "d7f9af68-9a32-4fc8-9c2e-552788c2ff89" (UID: "d7f9af68-9a32-4fc8-9c2e-552788c2ff89"). InnerVolumeSpecName "kube-api-access-22tv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.287929 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7f9af68-9a32-4fc8-9c2e-552788c2ff89" (UID: "d7f9af68-9a32-4fc8-9c2e-552788c2ff89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.292472 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.360203 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.360235 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/d7f9af68-9a32-4fc8-9c2e-552788c2ff89-kube-api-access-22tv6\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.396339 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerID="0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de" exitCode=0 Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.396507 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerDied","Data":"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de"} Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.396747 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wrt8" event={"ID":"d7f9af68-9a32-4fc8-9c2e-552788c2ff89","Type":"ContainerDied","Data":"9b64cd208aa24a9bda4b4da5e9b540bef2a99089e14cd03b6ed993ca5f580436"} Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.396602 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wrt8" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.396774 4680 scope.go:117] "RemoveContainer" containerID="0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.400250 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad417dd7-c38c-4934-a895-d0253bb03494","Type":"ContainerDied","Data":"f7ec228a1927a1a98103559e36930d05879265c6320b21245f43349f9b944a11"} Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.400337 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.434146 4680 scope.go:117] "RemoveContainer" containerID="cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.441578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" event={"ID":"f7ddd112-a6cf-481a-97ff-0f50ed652b23","Type":"ContainerStarted","Data":"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027"} Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.442653 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.485036 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.497522 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.498719 4680 scope.go:117] "RemoveContainer" containerID="60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.498833 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" podStartSLOduration=4.498809924 podStartE2EDuration="4.498809924s" podCreationTimestamp="2026-01-26 16:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:30:42.485856316 +0000 UTC m=+1517.647128585" watchObservedRunningTime="2026-01-26 16:30:42.498809924 +0000 UTC m=+1517.660082193" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533128 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.533557 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="extract-utilities" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533569 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="extract-utilities" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.533633 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="setup-container" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533640 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="setup-container" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.533654 4680 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="extract-content" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533660 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="extract-content" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.533690 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="rabbitmq" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533696 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="rabbitmq" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.533709 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="registry-server" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533714 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="registry-server" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533881 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" containerName="registry-server" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.533892 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="rabbitmq" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.534903 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.538678 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhs5p" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.539035 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.541252 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.541437 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.542109 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.542167 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.542351 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.542443 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.571017 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wrt8"] Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.590228 4680 scope.go:117] "RemoveContainer" containerID="0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.592514 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de\": container with ID starting with 0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de not found: ID does not exist" containerID="0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.592547 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de"} err="failed to get container status \"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de\": rpc error: code = NotFound desc = could not find container \"0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de\": container with ID starting with 0f4c2a28dd354b8d87e3e3fe61e76b335e6638eaa5e73fe401202302e33c27de not found: ID does not exist" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.592571 4680 scope.go:117] "RemoveContainer" containerID="cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.593722 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.596369 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c\": container with ID starting with cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c not found: ID does not exist" containerID="cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.596407 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c"} err="failed to get container status \"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c\": rpc error: code = NotFound desc = could not find container \"cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c\": container with ID starting with cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c not found: ID does not exist" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.596435 4680 scope.go:117] "RemoveContainer" containerID="60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12" Jan 26 16:30:42 crc kubenswrapper[4680]: E0126 16:30:42.600635 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12\": container with ID starting with 60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12 not found: ID does not exist" containerID="60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.600682 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12"} err="failed to get container status \"60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12\": rpc error: code = NotFound desc = could not find container \"60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12\": container with ID starting with 60a81e48408531661eb46ca0e015cd0ac77bbefe141371d2aeba5ebe273dea12 not found: ID does not exist" Jan 26 16:30:42 crc 
kubenswrapper[4680]: I0126 16:30:42.600716 4680 scope.go:117] "RemoveContainer" containerID="ef56e993d17a4c431f76847a8a65b409e91b9c019e0979c88bc3be1045841c34" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.648247 4680 scope.go:117] "RemoveContainer" containerID="84ffc9794e476e25f8d2a669fe751a60e111aa3beb943ac132db59158c8a2961" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.672960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673469 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khmns\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-kube-api-access-khmns\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673552 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673632 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673738 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673823 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673888 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.673954 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.674095 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.674171 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.674246 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775529 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775607 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775667 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775774 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775818 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775849 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khmns\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-kube-api-access-khmns\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.775973 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.776762 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.776804 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.776963 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.777034 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.777334 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.777959 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.787622 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.788336 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.789704 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.793538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.803797 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khmns\" (UniqueName: \"kubernetes.io/projected/87da0bc8-aff7-4bd8-afeb-cba14a6e906e-kube-api-access-khmns\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.826256 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"87da0bc8-aff7-4bd8-afeb-cba14a6e906e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.861214 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:30:42 crc kubenswrapper[4680]: I0126 16:30:42.917518 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.191379 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7b1e0b-5218-426e-aca1-76d49633811c" path="/var/lib/kubelet/pods/3b7b1e0b-5218-426e-aca1-76d49633811c/volumes" Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.193006 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" path="/var/lib/kubelet/pods/ad417dd7-c38c-4934-a895-d0253bb03494/volumes" Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.194391 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f9af68-9a32-4fc8-9c2e-552788c2ff89" path="/var/lib/kubelet/pods/d7f9af68-9a32-4fc8-9c2e-552788c2ff89/volumes" Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.397233 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:30:43 crc kubenswrapper[4680]: W0126 16:30:43.402953 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87da0bc8_aff7_4bd8_afeb_cba14a6e906e.slice/crio-5d19e8b9b37cc3ed06e588e1ffaab58353c3b405b0cf327ab9bca12b5c9fa7e0 WatchSource:0}: Error finding container 5d19e8b9b37cc3ed06e588e1ffaab58353c3b405b0cf327ab9bca12b5c9fa7e0: Status 404 returned error can't find the container with id 5d19e8b9b37cc3ed06e588e1ffaab58353c3b405b0cf327ab9bca12b5c9fa7e0 Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.451824 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985","Type":"ContainerStarted","Data":"c05757cb5e16052eee9e92d211b62884efd5126e26aedfffa9470d24fe7530a3"} Jan 26 16:30:43 crc kubenswrapper[4680]: I0126 16:30:43.455042 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87da0bc8-aff7-4bd8-afeb-cba14a6e906e","Type":"ContainerStarted","Data":"5d19e8b9b37cc3ed06e588e1ffaab58353c3b405b0cf327ab9bca12b5c9fa7e0"} Jan 26 16:30:44 crc kubenswrapper[4680]: E0126 16:30:44.248792 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:30:44 crc kubenswrapper[4680]: I0126 16:30:44.468206 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985","Type":"ContainerStarted","Data":"a01607fb22fb2fb49eac5c19be5b53484659452ec6b4881098bb08d44b4805f2"} Jan 26 16:30:45 crc kubenswrapper[4680]: I0126 16:30:45.476489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87da0bc8-aff7-4bd8-afeb-cba14a6e906e","Type":"ContainerStarted","Data":"2fb3bc1154ccc8f9aada2e0ea91b98f3ecad47dc09ad710dfc6187f5e0ed3c64"} Jan 26 16:30:45 crc kubenswrapper[4680]: I0126 16:30:45.490740 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ad417dd7-c38c-4934-a895-d0253bb03494" containerName="rabbitmq" 
probeResult="failure" output="dial tcp 10.217.0.98:5671: i/o timeout" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.013242 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.085815 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"] Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.086550 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8f686847-5rzkm" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="dnsmasq-dns" containerID="cri-o://05672050f3386b5bffd5d30de43ddd32c63f44e136ca849a73e5cab96ba3be83" gracePeriod=10 Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.361341 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5885c9c559-k7v8b"] Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.368659 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.401823 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5885c9c559-k7v8b"] Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.503612 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-config\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.503691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-svc\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.503752 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.503774 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tzcp\" (UniqueName: \"kubernetes.io/projected/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-kube-api-access-9tzcp\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.505318 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-swift-storage-0\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.505386 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.505422 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.564001 4680 generic.go:334] "Generic (PLEG): container finished" podID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerID="05672050f3386b5bffd5d30de43ddd32c63f44e136ca849a73e5cab96ba3be83" exitCode=0 Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.564037 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f686847-5rzkm" event={"ID":"324b761d-e4a0-47ba-99ee-f3a561ac07a3","Type":"ContainerDied","Data":"05672050f3386b5bffd5d30de43ddd32c63f44e136ca849a73e5cab96ba3be83"} Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614027 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-swift-storage-0\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614184 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614240 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614319 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-config\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614340 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-svc\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614464 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc 
kubenswrapper[4680]: I0126 16:30:49.614479 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tzcp\" (UniqueName: \"kubernetes.io/projected/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-kube-api-access-9tzcp\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.614956 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-swift-storage-0\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.615937 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.615973 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.616846 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-dns-svc\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.617430 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-config\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.617724 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.636938 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tzcp\" (UniqueName: \"kubernetes.io/projected/f1d24c0b-1efb-49fb-bc92-167fa6018ba9-kube-api-access-9tzcp\") pod \"dnsmasq-dns-5885c9c559-k7v8b\" (UID: \"f1d24c0b-1efb-49fb-bc92-167fa6018ba9\") " pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.686464 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.754909 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f686847-5rzkm" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.834999 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.835163 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.835200 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.835265 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9cn\" (UniqueName: \"kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.835339 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.835366 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config\") pod \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\" (UID: \"324b761d-e4a0-47ba-99ee-f3a561ac07a3\") " Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.864785 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn" (OuterVolumeSpecName: "kube-api-access-8d9cn") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "kube-api-access-8d9cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.885564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config" (OuterVolumeSpecName: "config") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.902611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.938819 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9cn\" (UniqueName: \"kubernetes.io/projected/324b761d-e4a0-47ba-99ee-f3a561ac07a3-kube-api-access-8d9cn\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.938847 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.938856 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.940119 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:49 crc kubenswrapper[4680]: I0126 16:30:49.980621 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.041103 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.041454 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.050304 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "324b761d-e4a0-47ba-99ee-f3a561ac07a3" (UID: "324b761d-e4a0-47ba-99ee-f3a561ac07a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.143018 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324b761d-e4a0-47ba-99ee-f3a561ac07a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.386015 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5885c9c559-k7v8b"] Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.573170 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f686847-5rzkm" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.573168 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f686847-5rzkm" event={"ID":"324b761d-e4a0-47ba-99ee-f3a561ac07a3","Type":"ContainerDied","Data":"14e7adce2bb6b88ebdcd1f245b1de7cd4119b41a21737bafe68103af000fcc98"} Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.573578 4680 scope.go:117] "RemoveContainer" containerID="05672050f3386b5bffd5d30de43ddd32c63f44e136ca849a73e5cab96ba3be83" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.574161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" event={"ID":"f1d24c0b-1efb-49fb-bc92-167fa6018ba9","Type":"ContainerStarted","Data":"0d793047ac2c038f2dbb1641b2556c55910c2e799652fc6692af837a244b11e0"} Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.597753 4680 scope.go:117] "RemoveContainer" containerID="ba2e4164c3c75a3c1736f3a136a4936b66d8c76359a8f9a008b16e795dbdaf98" Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.604853 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"] Jan 26 16:30:50 crc kubenswrapper[4680]: I0126 16:30:50.614991 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8f686847-5rzkm"] Jan 26 16:30:51 crc kubenswrapper[4680]: I0126 16:30:51.180328 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" path="/var/lib/kubelet/pods/324b761d-e4a0-47ba-99ee-f3a561ac07a3/volumes" Jan 26 16:30:51 crc kubenswrapper[4680]: I0126 16:30:51.585513 4680 generic.go:334] "Generic (PLEG): container finished" podID="f1d24c0b-1efb-49fb-bc92-167fa6018ba9" containerID="2c31240d545f8177e93d97066f7fbb3c6acf7f4c674d38689935d2d64997ec07" exitCode=0 Jan 26 16:30:51 crc kubenswrapper[4680]: I0126 16:30:51.585555 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" event={"ID":"f1d24c0b-1efb-49fb-bc92-167fa6018ba9","Type":"ContainerDied","Data":"2c31240d545f8177e93d97066f7fbb3c6acf7f4c674d38689935d2d64997ec07"} Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.595287 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" event={"ID":"f1d24c0b-1efb-49fb-bc92-167fa6018ba9","Type":"ContainerStarted","Data":"fc44a2af0c2ba9e88ca0734b4e54333aa61282b31d9ee7772ec79149b197724e"} Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.595678 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.613053 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" podStartSLOduration=3.613031233 podStartE2EDuration="3.613031233s" podCreationTimestamp="2026-01-26 16:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:30:52.609571895 +0000 UTC m=+1527.770844164" watchObservedRunningTime="2026-01-26 16:30:52.613031233 +0000 UTC m=+1527.774303512" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.722590 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"] Jan 26 16:30:52 crc kubenswrapper[4680]: E0126 16:30:52.722979 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="init" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.722995 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="init" Jan 26 16:30:52 crc kubenswrapper[4680]: E0126 16:30:52.723011 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="dnsmasq-dns" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.723017 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="dnsmasq-dns" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.723226 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b761d-e4a0-47ba-99ee-f3a561ac07a3" containerName="dnsmasq-dns" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.724708 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.748208 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"] Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.796980 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.797052 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56thc\" (UniqueName: \"kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.797516 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.899545 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.899717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.899750 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56thc\" (UniqueName: \"kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc\") pod \"certified-operators-w5sr9\" (UID: 
\"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.900049 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.900143 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:52 crc kubenswrapper[4680]: I0126 16:30:52.927197 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56thc\" (UniqueName: \"kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc\") pod \"certified-operators-w5sr9\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:53 crc kubenswrapper[4680]: I0126 16:30:53.060099 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:30:53 crc kubenswrapper[4680]: I0126 16:30:53.517140 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"] Jan 26 16:30:53 crc kubenswrapper[4680]: I0126 16:30:53.608995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerStarted","Data":"cdd4ecaf02d81080f89ba40b54f100acbeff5bdab7e7691200b6f3d36fc1308a"} Jan 26 16:30:54 crc kubenswrapper[4680]: E0126 16:30:54.489686 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:30:54 crc kubenswrapper[4680]: I0126 16:30:54.619187 4680 generic.go:334] "Generic (PLEG): container finished" podID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerID="870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48" exitCode=0 Jan 26 16:30:54 crc kubenswrapper[4680]: I0126 16:30:54.619236 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerDied","Data":"870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48"} Jan 26 16:30:56 crc kubenswrapper[4680]: I0126 16:30:56.635578 4680 generic.go:334] "Generic (PLEG): container finished" podID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerID="f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489" exitCode=0 Jan 26 16:30:56 crc kubenswrapper[4680]: I0126 16:30:56.635785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerDied","Data":"f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489"} Jan 26 16:30:57 crc 
kubenswrapper[4680]: I0126 16:30:57.646380 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerStarted","Data":"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0"} Jan 26 16:30:57 crc kubenswrapper[4680]: I0126 16:30:57.672036 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w5sr9" podStartSLOduration=3.253022009 podStartE2EDuration="5.672014061s" podCreationTimestamp="2026-01-26 16:30:52 +0000 UTC" firstStartedPulling="2026-01-26 16:30:54.621736394 +0000 UTC m=+1529.783008663" lastFinishedPulling="2026-01-26 16:30:57.040728456 +0000 UTC m=+1532.202000715" observedRunningTime="2026-01-26 16:30:57.663898161 +0000 UTC m=+1532.825170450" watchObservedRunningTime="2026-01-26 16:30:57.672014061 +0000 UTC m=+1532.833286330" Jan 26 16:30:59 crc kubenswrapper[4680]: I0126 16:30:59.688344 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5885c9c559-k7v8b" Jan 26 16:30:59 crc kubenswrapper[4680]: I0126 16:30:59.786774 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:30:59 crc kubenswrapper[4680]: I0126 16:30:59.790623 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="dnsmasq-dns" containerID="cri-o://f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027" gracePeriod=10 Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.670828 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.679101 4680 generic.go:334] "Generic (PLEG): container finished" podID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerID="f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027" exitCode=0 Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.679141 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" event={"ID":"f7ddd112-a6cf-481a-97ff-0f50ed652b23","Type":"ContainerDied","Data":"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027"} Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.679168 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" event={"ID":"f7ddd112-a6cf-481a-97ff-0f50ed652b23","Type":"ContainerDied","Data":"2c8a46267a94ec25143ea05cc14b2cac6cfcb4080d1372934a5870b6cb5c1c6d"} Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.679185 4680 scope.go:117] "RemoveContainer" containerID="f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.679194 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf9d897bc-b5qgp" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.754570 4680 scope.go:117] "RemoveContainer" containerID="ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.787834 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788175 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788280 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788313 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788392 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788421 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t9b9\" (UniqueName: \"kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.788506 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc\") pod \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\" (UID: \"f7ddd112-a6cf-481a-97ff-0f50ed652b23\") " Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.822592 4680 scope.go:117] "RemoveContainer" containerID="f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.822588 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9" (OuterVolumeSpecName: "kube-api-access-9t9b9") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "kube-api-access-9t9b9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: E0126 16:31:00.827898 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027\": container with ID starting with f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027 not found: ID does not exist" containerID="f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.827943 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027"} err="failed to get container status \"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027\": rpc error: code = NotFound desc = could not find container \"f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027\": container with ID starting with f1cf204749193e9a326d3a88432c6537c2bc79ab31fff4eac2d0195828fd7027 not found: ID does not exist" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.827969 4680 scope.go:117] "RemoveContainer" containerID="ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6" Jan 26 16:31:00 crc kubenswrapper[4680]: E0126 16:31:00.829142 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6\": container with ID starting with ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6 not found: ID does not exist" containerID="ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.829189 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6"} err="failed to get container status \"ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6\": rpc error: code = NotFound desc = could not find container \"ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6\": container with ID starting with ff377cb2f0e2828ce0b2855bb2eea5e64f1c340c2ff845576abdf3b5ed261ac6 not found: ID does not exist" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.869515 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.885357 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.890278 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.890562 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.890591 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t9b9\" (UniqueName: \"kubernetes.io/projected/f7ddd112-a6cf-481a-97ff-0f50ed652b23-kube-api-access-9t9b9\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.890605 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.890617 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.911414 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config" (OuterVolumeSpecName: "config") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.913322 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.914628 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7ddd112-a6cf-481a-97ff-0f50ed652b23" (UID: "f7ddd112-a6cf-481a-97ff-0f50ed652b23"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.992556 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.992591 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:00 crc kubenswrapper[4680]: I0126 16:31:00.992605 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ddd112-a6cf-481a-97ff-0f50ed652b23-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:01 crc kubenswrapper[4680]: I0126 16:31:01.014008 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:31:01 crc kubenswrapper[4680]: I0126 16:31:01.024766 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cf9d897bc-b5qgp"] Jan 26 16:31:01 crc kubenswrapper[4680]: I0126 16:31:01.179600 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" path="/var/lib/kubelet/pods/f7ddd112-a6cf-481a-97ff-0f50ed652b23/volumes" Jan 26 16:31:03 crc kubenswrapper[4680]: I0126 16:31:03.060941 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:31:03 crc kubenswrapper[4680]: I0126 16:31:03.061310 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:31:03 crc kubenswrapper[4680]: I0126 16:31:03.116164 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:31:03 crc kubenswrapper[4680]: I0126 16:31:03.748207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:31:03 crc kubenswrapper[4680]: I0126 16:31:03.825082 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"] Jan 26 16:31:04 crc kubenswrapper[4680]: E0126 16:31:04.804314 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:31:05 crc kubenswrapper[4680]: I0126 16:31:05.726284 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w5sr9" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="registry-server" containerID="cri-o://d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0" gracePeriod=2 Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.317736 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5sr9" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.397843 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56thc\" (UniqueName: \"kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc\") pod \"964cf7e9-1d9e-413d-92d3-8c304258505f\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.398025 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content\") pod \"964cf7e9-1d9e-413d-92d3-8c304258505f\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.398092 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities\") pod \"964cf7e9-1d9e-413d-92d3-8c304258505f\" (UID: \"964cf7e9-1d9e-413d-92d3-8c304258505f\") " Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.399170 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities" (OuterVolumeSpecName: "utilities") pod "964cf7e9-1d9e-413d-92d3-8c304258505f" (UID: "964cf7e9-1d9e-413d-92d3-8c304258505f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.404376 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc" (OuterVolumeSpecName: "kube-api-access-56thc") pod "964cf7e9-1d9e-413d-92d3-8c304258505f" (UID: "964cf7e9-1d9e-413d-92d3-8c304258505f"). InnerVolumeSpecName "kube-api-access-56thc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.485467 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "964cf7e9-1d9e-413d-92d3-8c304258505f" (UID: "964cf7e9-1d9e-413d-92d3-8c304258505f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.500020 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56thc\" (UniqueName: \"kubernetes.io/projected/964cf7e9-1d9e-413d-92d3-8c304258505f-kube-api-access-56thc\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.500062 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.500091 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964cf7e9-1d9e-413d-92d3-8c304258505f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.735216 4680 generic.go:334] "Generic (PLEG): container finished" podID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerID="d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0" exitCode=0 Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.735266 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerDied","Data":"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0"} Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.735304 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5sr9" event={"ID":"964cf7e9-1d9e-413d-92d3-8c304258505f","Type":"ContainerDied","Data":"cdd4ecaf02d81080f89ba40b54f100acbeff5bdab7e7691200b6f3d36fc1308a"} Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.735327 4680 scope.go:117] "RemoveContainer" containerID="d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0" Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.735569 4680 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.762568 4680 scope.go:117] "RemoveContainer" containerID="f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.771403 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"]
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.783327 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w5sr9"]
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.793842 4680 scope.go:117] "RemoveContainer" containerID="870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.823821 4680 scope.go:117] "RemoveContainer" containerID="d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0"
Jan 26 16:31:06 crc kubenswrapper[4680]: E0126 16:31:06.824365 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0\": container with ID starting with d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0 not found: ID does not exist" containerID="d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.824482 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0"} err="failed to get container status \"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0\": rpc error: code = NotFound desc = could not find container \"d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0\": container with ID starting with d0e820b03ca7e2df40220450d9c22717787e70392c928afe5ef74bdc55d739c0 not found: ID does not exist"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.824562 4680 scope.go:117] "RemoveContainer" containerID="f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489"
Jan 26 16:31:06 crc kubenswrapper[4680]: E0126 16:31:06.825049 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489\": container with ID starting with f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489 not found: ID does not exist" containerID="f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.825090 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489"} err="failed to get container status \"f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489\": rpc error: code = NotFound desc = could not find container \"f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489\": container with ID starting with f00a91ba6c87ac24d25d7282cb8d2adc8b521d5b3fb1e990d7095809228d8489 not found: ID does not exist"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.825112 4680 scope.go:117] "RemoveContainer" containerID="870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48"
Jan 26 16:31:06 crc kubenswrapper[4680]: E0126 16:31:06.825424 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48\": container with ID starting with 870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48 not found: ID does not exist" containerID="870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48"
Jan 26 16:31:06 crc kubenswrapper[4680]: I0126 16:31:06.825506 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48"} err="failed to get container status \"870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48\": rpc error: code = NotFound desc = could not find container \"870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48\": container with ID starting with 870523931f97b1c60ebf89e7eaa65c6ef4da4ebba28ff56fbb9bb511819c2e48 not found: ID does not exist"
Jan 26 16:31:07 crc kubenswrapper[4680]: I0126 16:31:07.182640 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" path="/var/lib/kubelet/pods/964cf7e9-1d9e-413d-92d3-8c304258505f/volumes"
Jan 26 16:31:15 crc kubenswrapper[4680]: E0126 16:31:15.036611 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.824381 4680 generic.go:334] "Generic (PLEG): container finished" podID="87da0bc8-aff7-4bd8-afeb-cba14a6e906e" containerID="2fb3bc1154ccc8f9aada2e0ea91b98f3ecad47dc09ad710dfc6187f5e0ed3c64" exitCode=0
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.824468 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87da0bc8-aff7-4bd8-afeb-cba14a6e906e","Type":"ContainerDied","Data":"2fb3bc1154ccc8f9aada2e0ea91b98f3ecad47dc09ad710dfc6187f5e0ed3c64"}
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.828529 4680 generic.go:334] "Generic (PLEG): container finished" podID="cc818c5c-f9dc-46e9-b1d0-bca79fa6a985" containerID="a01607fb22fb2fb49eac5c19be5b53484659452ec6b4881098bb08d44b4805f2" exitCode=0
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.828701 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985","Type":"ContainerDied","Data":"a01607fb22fb2fb49eac5c19be5b53484659452ec6b4881098bb08d44b4805f2"}
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949439 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"]
Jan 26 16:31:16 crc kubenswrapper[4680]: E0126 16:31:16.949813 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="extract-utilities"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949829 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="extract-utilities"
Jan 26 16:31:16 crc kubenswrapper[4680]: E0126 16:31:16.949859 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="registry-server"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949867 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="registry-server"
Jan 26 16:31:16 crc kubenswrapper[4680]: E0126 16:31:16.949879 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="extract-content"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949885 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="extract-content"
Jan 26 16:31:16 crc kubenswrapper[4680]: E0126 16:31:16.949896 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="init"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949902 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="init"
Jan 26 16:31:16 crc kubenswrapper[4680]: E0126 16:31:16.949912 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="dnsmasq-dns"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.949917 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="dnsmasq-dns"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.950111 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="964cf7e9-1d9e-413d-92d3-8c304258505f" containerName="registry-server"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.950127 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ddd112-a6cf-481a-97ff-0f50ed652b23" containerName="dnsmasq-dns"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.950729 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.963879 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.964125 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.964275 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:31:16 crc kubenswrapper[4680]: I0126 16:31:16.964408 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.013469 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"]
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.063415 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swn8b\" (UniqueName: \"kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.063701 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.063861 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.063895 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.169554 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swn8b\" (UniqueName: \"kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.169625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"
\"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.169663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.169691 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.176360 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.180536 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.189402 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.201022 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swn8b\" (UniqueName: \"kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.331033 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.865292 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87da0bc8-aff7-4bd8-afeb-cba14a6e906e","Type":"ContainerStarted","Data":"0584d4170f1f6452676870fa9a0fd109ee8238e1a7f8c81c0f13ea8a43be22d2"} Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.866645 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.868209 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc818c5c-f9dc-46e9-b1d0-bca79fa6a985","Type":"ContainerStarted","Data":"154d615da45848d43b430a2bcfb5a05af2f0e93eead139324c000b6e2b3c9503"} Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.868636 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 16:31:17 crc kubenswrapper[4680]: I0126 16:31:17.908898 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.908864778 podStartE2EDuration="35.908864778s" podCreationTimestamp="2026-01-26 16:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:31:17.90188753 +0000 UTC m=+1553.063159819" watchObservedRunningTime="2026-01-26 16:31:17.908864778 +0000 UTC m=+1553.070137047" Jan 26 16:31:18 crc kubenswrapper[4680]: I0126 16:31:18.306957 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.306929602 podStartE2EDuration="37.306929602s" podCreationTimestamp="2026-01-26 16:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:31:17.939478165 +0000 UTC m=+1553.100750434" watchObservedRunningTime="2026-01-26 16:31:18.306929602 +0000 UTC m=+1553.468201881" Jan 26 16:31:18 crc kubenswrapper[4680]: I0126 16:31:18.314403 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7"] Jan 26 16:31:18 crc kubenswrapper[4680]: W0126 16:31:18.315549 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd5d1ad_8a5c_47d6_a087_274b8c14a0e2.slice/crio-46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e WatchSource:0}: Error finding container 46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e: Status 404 returned error can't find the container with id 46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e Jan 26 16:31:18 crc kubenswrapper[4680]: I0126 16:31:18.879975 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" event={"ID":"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2","Type":"ContainerStarted","Data":"46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e"} Jan 26 16:31:25 crc kubenswrapper[4680]: E0126 16:31:25.338889 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f9af68_9a32_4fc8_9c2e_552788c2ff89.slice/crio-conmon-cc98088cf7a1dc25cb4f7a3eb9e50d2f471fa631b18f9b246b238da3995bab5c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:31:32 crc kubenswrapper[4680]: I0126 16:31:32.296928 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 16:31:32 crc kubenswrapper[4680]: I0126 16:31:32.864265 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:31:33 crc kubenswrapper[4680]: I0126 16:31:33.614185 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:31:34 crc kubenswrapper[4680]: I0126 16:31:34.042492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" event={"ID":"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2","Type":"ContainerStarted","Data":"4147949e6a579932a22ffbeb3d18873d825cb5a237b277793a71f2527493e988"} Jan 26 16:31:34 crc kubenswrapper[4680]: I0126 16:31:34.071573 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" podStartSLOduration=2.779468917 podStartE2EDuration="18.071553415s" podCreationTimestamp="2026-01-26 16:31:16 +0000 UTC" firstStartedPulling="2026-01-26 16:31:18.319480377 +0000 UTC m=+1553.480752646" lastFinishedPulling="2026-01-26 16:31:33.611564875 +0000 UTC m=+1568.772837144" observedRunningTime="2026-01-26 16:31:34.060056839 +0000 UTC m=+1569.221329098" watchObservedRunningTime="2026-01-26 16:31:34.071553415 +0000 UTC m=+1569.232825684" Jan 26 16:31:41 crc kubenswrapper[4680]: I0126 16:31:41.642861 4680 scope.go:117] "RemoveContainer" containerID="fb6d9f8b82ea74bdd046ee4d42e97faac01d44e12013a3e1eda691efb47209b1" Jan 26 16:31:45 crc kubenswrapper[4680]: E0126 16:31:45.831913 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd5d1ad_8a5c_47d6_a087_274b8c14a0e2.slice/crio-conmon-4147949e6a579932a22ffbeb3d18873d825cb5a237b277793a71f2527493e988.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd5d1ad_8a5c_47d6_a087_274b8c14a0e2.slice/crio-4147949e6a579932a22ffbeb3d18873d825cb5a237b277793a71f2527493e988.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:31:46 crc kubenswrapper[4680]: I0126 16:31:46.186868 4680 generic.go:334] "Generic (PLEG): container finished" podID="3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" containerID="4147949e6a579932a22ffbeb3d18873d825cb5a237b277793a71f2527493e988" exitCode=0 Jan 26 16:31:46 crc kubenswrapper[4680]: I0126 16:31:46.187148 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" event={"ID":"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2","Type":"ContainerDied","Data":"4147949e6a579932a22ffbeb3d18873d825cb5a237b277793a71f2527493e988"} Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.645868 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.752159 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") pod \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.752246 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swn8b\" (UniqueName: \"kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b\") pod \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.752346 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory\") pod \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.752445 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle\") pod \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\" (UID: \"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2\") " Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.757782 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" (UID: "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.758135 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b" (OuterVolumeSpecName: "kube-api-access-swn8b") pod "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" (UID: "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2"). InnerVolumeSpecName "kube-api-access-swn8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.782230 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory" (OuterVolumeSpecName: "inventory") pod "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" (UID: "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.783861 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" (UID: "3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.854133 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.854171 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swn8b\" (UniqueName: \"kubernetes.io/projected/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-kube-api-access-swn8b\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.854183 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:47 crc kubenswrapper[4680]: I0126 16:31:47.854191 4680 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.211584 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" event={"ID":"3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2","Type":"ContainerDied","Data":"46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e"} Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.211972 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46398c3014815bd23be35d514be1f258e874173605d3225a32c967372c9e543e" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.211651 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c8fb7" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.305118 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp"] Jan 26 16:31:48 crc kubenswrapper[4680]: E0126 16:31:48.305612 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.305636 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.305854 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd5d1ad-8a5c-47d6-a087-274b8c14a0e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.306963 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.309965 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.310252 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.310376 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.310499 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.319390 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp"] Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.464842 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s55s\" (UniqueName: \"kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.465177 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.465301 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.567205 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s55s\" (UniqueName: \"kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.567549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.567585 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.572748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.573098 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.595694 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s55s\" (UniqueName: \"kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgbnp\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:48 crc kubenswrapper[4680]: I0126 16:31:48.624280 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:49 crc kubenswrapper[4680]: I0126 16:31:49.118916 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp"] Jan 26 16:31:49 crc kubenswrapper[4680]: I0126 16:31:49.221400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" event={"ID":"321b53db-6076-41fc-9da8-058c8e6804f0","Type":"ContainerStarted","Data":"cb41de5884b70dd7fabbf555d76af6efb87e2503af63b6b6e67079a83998402a"} Jan 26 16:31:50 crc kubenswrapper[4680]: I0126 16:31:50.231482 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" event={"ID":"321b53db-6076-41fc-9da8-058c8e6804f0","Type":"ContainerStarted","Data":"158d72e54da98ef2c9ab7777417220a0f55d28c26a442b3ddc3bbd56528a176f"} Jan 26 16:31:50 crc kubenswrapper[4680]: I0126 16:31:50.263398 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" podStartSLOduration=1.834358216 podStartE2EDuration="2.263376466s" podCreationTimestamp="2026-01-26 16:31:48 +0000 UTC" firstStartedPulling="2026-01-26 16:31:49.127279272 +0000 UTC m=+1584.288551541" lastFinishedPulling="2026-01-26 16:31:49.556297522 +0000 UTC m=+1584.717569791" observedRunningTime="2026-01-26 16:31:50.251431617 +0000 UTC m=+1585.412703886" watchObservedRunningTime="2026-01-26 16:31:50.263376466 +0000 UTC m=+1585.424648735" Jan 26 16:31:53 crc kubenswrapper[4680]: I0126 16:31:53.261323 4680 generic.go:334] "Generic (PLEG): container finished" podID="321b53db-6076-41fc-9da8-058c8e6804f0" containerID="158d72e54da98ef2c9ab7777417220a0f55d28c26a442b3ddc3bbd56528a176f" exitCode=0 Jan 26 16:31:53 crc kubenswrapper[4680]: I0126 16:31:53.261419 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" event={"ID":"321b53db-6076-41fc-9da8-058c8e6804f0","Type":"ContainerDied","Data":"158d72e54da98ef2c9ab7777417220a0f55d28c26a442b3ddc3bbd56528a176f"} Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.660846 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.785453 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s55s\" (UniqueName: \"kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s\") pod \"321b53db-6076-41fc-9da8-058c8e6804f0\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.786305 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam\") pod \"321b53db-6076-41fc-9da8-058c8e6804f0\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.786414 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory\") pod \"321b53db-6076-41fc-9da8-058c8e6804f0\" (UID: \"321b53db-6076-41fc-9da8-058c8e6804f0\") " Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.792496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s" (OuterVolumeSpecName: "kube-api-access-4s55s") pod "321b53db-6076-41fc-9da8-058c8e6804f0" (UID: "321b53db-6076-41fc-9da8-058c8e6804f0"). InnerVolumeSpecName "kube-api-access-4s55s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.820630 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory" (OuterVolumeSpecName: "inventory") pod "321b53db-6076-41fc-9da8-058c8e6804f0" (UID: "321b53db-6076-41fc-9da8-058c8e6804f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.824496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "321b53db-6076-41fc-9da8-058c8e6804f0" (UID: "321b53db-6076-41fc-9da8-058c8e6804f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.888972 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.889156 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321b53db-6076-41fc-9da8-058c8e6804f0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:54 crc kubenswrapper[4680]: I0126 16:31:54.889229 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s55s\" (UniqueName: \"kubernetes.io/projected/321b53db-6076-41fc-9da8-058c8e6804f0-kube-api-access-4s55s\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.288464 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.288447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgbnp" event={"ID":"321b53db-6076-41fc-9da8-058c8e6804f0","Type":"ContainerDied","Data":"cb41de5884b70dd7fabbf555d76af6efb87e2503af63b6b6e67079a83998402a"} Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.289153 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb41de5884b70dd7fabbf555d76af6efb87e2503af63b6b6e67079a83998402a" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.368264 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv"] Jan 26 16:31:55 crc kubenswrapper[4680]: E0126 16:31:55.368757 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321b53db-6076-41fc-9da8-058c8e6804f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.368783 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="321b53db-6076-41fc-9da8-058c8e6804f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.369111 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="321b53db-6076-41fc-9da8-058c8e6804f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.369954 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.374673 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.375009 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.375280 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.375561 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.388845 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv"] Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.503165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.503312 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n297\" (UniqueName: \"kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.503459 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.503741 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.605936 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.606051 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.606156 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n297\" (UniqueName: \"kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.606201 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.617226 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.617670 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.620055 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.623671 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n297\" (UniqueName: \"kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:55 crc kubenswrapper[4680]: I0126 16:31:55.709115 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" Jan 26 16:31:56 crc kubenswrapper[4680]: I0126 16:31:56.226959 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv"] Jan 26 16:31:56 crc kubenswrapper[4680]: I0126 16:31:56.298409 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" event={"ID":"19ba741e-77c6-4639-b1ae-24261fdab545","Type":"ContainerStarted","Data":"3bd1d8b1c8d4297282e2c066058ba51d61dbf383a92576655c9ef0724faa73b0"} Jan 26 16:31:57 crc kubenswrapper[4680]: I0126 16:31:57.308418 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" event={"ID":"19ba741e-77c6-4639-b1ae-24261fdab545","Type":"ContainerStarted","Data":"5d8b139706ae5a7b420e4a986e4b7e02c42c3cba8b5f116656fda922a4c92a26"} Jan 26 16:31:57 crc kubenswrapper[4680]: I0126 16:31:57.331802 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" podStartSLOduration=1.5701477449999999 podStartE2EDuration="2.331777445s" podCreationTimestamp="2026-01-26 16:31:55 +0000 UTC" firstStartedPulling="2026-01-26 16:31:56.254969161 +0000 UTC m=+1591.416241430" lastFinishedPulling="2026-01-26 16:31:57.016598861 +0000 UTC m=+1592.177871130" observedRunningTime="2026-01-26 16:31:57.321207136 +0000 UTC m=+1592.482479445" watchObservedRunningTime="2026-01-26 16:31:57.331777445 +0000 UTC m=+1592.493049724" Jan 26 16:32:16 crc kubenswrapper[4680]: I0126 16:32:16.980519 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:32:16 crc kubenswrapper[4680]: I0126 16:32:16.982176 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:41 crc kubenswrapper[4680]: I0126 16:32:41.809004 4680 scope.go:117] "RemoveContainer" containerID="c5682cc71bb3ce08b11f8c074717c4608c6f3dce8a51e71b04077d53406a00da" Jan 26 16:32:41 crc kubenswrapper[4680]: I0126 16:32:41.857316 4680 scope.go:117] "RemoveContainer" containerID="9cbf522c3f43bb45ee4706f3d28c25f0227a053cf9defc02abfdf942c88cf230" Jan 26 16:32:46 crc kubenswrapper[4680]: I0126 16:32:46.980863 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:32:46 crc kubenswrapper[4680]: I0126 16:32:46.981300 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.980469 4680 patch_prober.go:28] 
Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.980469 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.981258 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.981315 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm"
Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.982215 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:33:16 crc kubenswrapper[4680]: I0126 16:33:16.982276 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" gracePeriod=600
Jan 26 16:33:17 crc kubenswrapper[4680]: E0126 16:33:17.118026 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:33:17 crc kubenswrapper[4680]: I0126 16:33:17.978058 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" exitCode=0
Jan 26 16:33:17 crc kubenswrapper[4680]: I0126 16:33:17.978302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"}
Jan 26 16:33:17 crc kubenswrapper[4680]: I0126 16:33:17.978896 4680 scope.go:117] "RemoveContainer" containerID="b4fcfc1b4abf63ee958fe902223a0c398b190bd8c8128fbc0a7b39068c18c50a"
Jan 26 16:33:17 crc kubenswrapper[4680]: I0126 16:33:17.980785 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:33:17 crc kubenswrapper[4680]: E0126 16:33:17.981434 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
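The probe entries above give enough to reconstruct the liveness probe: the endpoint and port (http://127.0.0.1:8798/health) appear verbatim, failures land at 16:32:16, 16:32:46, and 16:33:16 (a 30-second cadence), and the kill follows the third consecutive failure with the pod's 600s grace period. A sketch of the implied probe using the k8s.io/api/core/v1 types; periodSeconds=30 and failureThreshold=3 are inferences from the timing, not values read from the manifest (which is owned by the machine-config-operator):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // from the probe output in the log
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumed from the 16:32:16 / 16:32:46 / 16:33:16 cadence
		FailureThreshold: 3,  // assumed: the kill followed the third failure
	}
	fmt.Printf("%+v\n", probe)
}
```

Note the exitCode=0 in the ContainerDied entry: the container answered the SIGTERM cleanly, but because the death was probe-initiated and the container had already been restarting, the kubelet still applies CrashLoopBackOff before the next start.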
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:33:30 crc kubenswrapper[4680]: I0126 16:33:30.169295 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:33:30 crc kubenswrapper[4680]: E0126 16:33:30.170012 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:33:41 crc kubenswrapper[4680]: I0126 16:33:41.927771 4680 scope.go:117] "RemoveContainer" containerID="0287777a9d10f356c3462ccefb3da05382adf5dc1a30152bfa2461d416b9ecbd" Jan 26 16:33:41 crc kubenswrapper[4680]: I0126 16:33:41.953269 4680 scope.go:117] "RemoveContainer" containerID="34175ba917b6168056e510cbc50274a20a03f021e5ece50f8d837ad3d7007b92" Jan 26 16:33:41 crc kubenswrapper[4680]: I0126 16:33:41.984477 4680 scope.go:117] "RemoveContainer" containerID="1991794023b8e95fc597b15d7c66dfc0a2d45cfc0e488e096db7bf4c78e04fa0" Jan 26 16:33:42 crc kubenswrapper[4680]: I0126 16:33:42.026002 4680 scope.go:117] "RemoveContainer" containerID="c0271566dff295f48ae35768f81fd04ee286264ae4200c88a7fcf87767b505cb" Jan 26 16:33:42 crc kubenswrapper[4680]: I0126 16:33:42.042549 4680 scope.go:117] "RemoveContainer" containerID="2d59a5e4bbe56fe9763049fcbefe12a707dc9f87c7ebfdbbaba4ff75bf628156" Jan 26 16:33:44 crc kubenswrapper[4680]: I0126 16:33:44.170584 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:33:44 crc kubenswrapper[4680]: E0126 16:33:44.171948 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:33:56 crc kubenswrapper[4680]: I0126 16:33:56.169835 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:33:56 crc kubenswrapper[4680]: E0126 16:33:56.171144 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:33:58 crc kubenswrapper[4680]: I0126 16:33:58.045472 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7dmt5"] Jan 26 16:33:58 crc kubenswrapper[4680]: I0126 16:33:58.053646 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7dmt5"] Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.041308 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s2hgk"] Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.052001 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7605-account-create-update-llm8x"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.061162 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-73bf-account-create-update-clmpj"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.068665 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fa50-account-create-update-vxdxg"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.078649 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s2hgk"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.089786 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7605-account-create-update-llm8x"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.097741 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-525dd"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.105802 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fa50-account-create-update-vxdxg"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.115231 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-73bf-account-create-update-clmpj"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.132014 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-525dd"]
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.180254 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdcb5d9-3066-4592-a0db-290c55aa87d6" path="/var/lib/kubelet/pods/4fdcb5d9-3066-4592-a0db-290c55aa87d6/volumes"
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.181386 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6087753a-56bf-4286-9ec8-fe1ce34d08f7" path="/var/lib/kubelet/pods/6087753a-56bf-4286-9ec8-fe1ce34d08f7/volumes"
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.182592 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f54278-5c8d-45e7-9d36-127fff79e22a" path="/var/lib/kubelet/pods/71f54278-5c8d-45e7-9d36-127fff79e22a/volumes"
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.183851 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8cc63a-ae3e-494d-b906-9c2d31441be3" path="/var/lib/kubelet/pods/7a8cc63a-ae3e-494d-b906-9c2d31441be3/volumes"
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.185557 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e092ed22-18ed-48b1-9d0d-43b93d8a60c6" path="/var/lib/kubelet/pods/e092ed22-18ed-48b1-9d0d-43b93d8a60c6/volumes"
Jan 26 16:33:59 crc kubenswrapper[4680]: I0126 16:33:59.186237 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fdc0f7-213f-408f-9ae7-590b8e900e28" path="/var/lib/kubelet/pods/e8fdc0f7-213f-408f-9ae7-590b8e900e28/volumes"
Jan 26 16:34:11 crc kubenswrapper[4680]: I0126 16:34:11.169823 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:34:11 crc kubenswrapper[4680]: E0126 16:34:11.170731 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
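The recurring "back-off 5m0s" message means the container's restart backoff has saturated at its cap; each RemoveContainer/"Error syncing pod" pair below is a sync attempt rejected while the backoff window is still open. A sketch of the capped exponential backoff behind the message; the 10s base and 5m cap are the upstream kubelet defaults, treated here as illustrative rather than read from this node:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelay doubles the wait per restart and saturates at limit, which is
// the shape of the kubelet's container restart backoff.
func backoffDelay(restarts int, base, limit time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoffDelay(r, 10*time.Second, 5*time.Minute))
	}
	// From restart 5 onward this prints 5m0s, the figure quoted in the
	// CrashLoopBackOff messages throughout this log.
}
```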
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:34:25 crc kubenswrapper[4680]: I0126 16:34:25.176229 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:34:25 crc kubenswrapper[4680]: E0126 16:34:25.177004 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:34:26 crc kubenswrapper[4680]: I0126 16:34:26.036680 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-frv9r"] Jan 26 16:34:26 crc kubenswrapper[4680]: I0126 16:34:26.045454 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-frv9r"] Jan 26 16:34:27 crc kubenswrapper[4680]: I0126 16:34:27.179120 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753e1d64-a470-4d8b-b715-8cc305a976af" path="/var/lib/kubelet/pods/753e1d64-a470-4d8b-b715-8cc305a976af/volumes" Jan 26 16:34:37 crc kubenswrapper[4680]: I0126 16:34:37.170522 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:34:37 crc kubenswrapper[4680]: E0126 16:34:37.171242 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.100724 4680 scope.go:117] "RemoveContainer" containerID="ce6d140f225f19194d0ced973fe88442985e00eb499ecc01139b47045112687d" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.133455 4680 scope.go:117] "RemoveContainer" containerID="b78535345c455f21ddb5be5ae69caff7f6ab249ef73ec9a19f4253430492adf8" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.183917 4680 scope.go:117] "RemoveContainer" containerID="1868f71e7bb0d2d4e14e86dfb4a1c0b739515cd1b5bb543e10b49138567d1d03" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.237622 4680 scope.go:117] "RemoveContainer" containerID="3a80b32bc81b3baef4649ff626c2687ffc152f0a4ffb1f439e902f5a68d369af" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.277548 4680 scope.go:117] "RemoveContainer" containerID="6f95f12d05f6e32e2eb66958edb1e8659a970014cb16e0b2e9c4112f51c0d068" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.318467 4680 scope.go:117] "RemoveContainer" containerID="2a2ccc8c9a928fbd6bb3d9820de33b0a71d11c9cb6683d0f6796750e392f4bc9" Jan 26 16:34:42 crc kubenswrapper[4680]: I0126 16:34:42.365485 4680 scope.go:117] "RemoveContainer" containerID="f6ad77995abfd86e5669048e685315a648217bd21797c45bb1f28f7ce1eed326" Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.042678 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-l5d8g"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.058302 4680 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-db-create-l5d8g"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.066330 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3562-account-create-update-hnrxz"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.079000 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8f25-account-create-update-mpnp7"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.087572 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-87v67"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.095558 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-fbd3-account-create-update-bfhvf"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.104210 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8f25-account-create-update-mpnp7"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.112684 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-87v67"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.120679 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-fbd3-account-create-update-bfhvf"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.128467 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3562-account-create-update-hnrxz"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.135995 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c904-account-create-update-gk6cn"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.143622 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c904-account-create-update-gk6cn"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.150974 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7qgpz"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.160473 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lmj6r"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.166191 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lmj6r"] Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.170483 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:34:48 crc kubenswrapper[4680]: E0126 16:34:48.170686 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:34:48 crc kubenswrapper[4680]: I0126 16:34:48.173408 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7qgpz"] Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.179288 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201dfb63-7a3b-49b5-a200-e2c9a042e9d0" path="/var/lib/kubelet/pods/201dfb63-7a3b-49b5-a200-e2c9a042e9d0/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.180159 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30684708-573a-4266-bc46-77aea415e091" 
path="/var/lib/kubelet/pods/30684708-573a-4266-bc46-77aea415e091/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.180907 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c72e11-8924-4e36-b6f1-6023bea30f11" path="/var/lib/kubelet/pods/33c72e11-8924-4e36-b6f1-6023bea30f11/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.181580 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65255cbe-9e75-495f-adc1-048491bf7460" path="/var/lib/kubelet/pods/65255cbe-9e75-495f-adc1-048491bf7460/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.182973 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892527eb-f1e6-437d-85a3-2631386f0d55" path="/var/lib/kubelet/pods/892527eb-f1e6-437d-85a3-2631386f0d55/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.183613 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e20470f-8b23-4f64-8dcb-91eecfedf6be" path="/var/lib/kubelet/pods/9e20470f-8b23-4f64-8dcb-91eecfedf6be/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.184308 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63117d0-89f2-4245-9c6b-74052d3d0ef6" path="/var/lib/kubelet/pods/c63117d0-89f2-4245-9c6b-74052d3d0ef6/volumes" Jan 26 16:34:49 crc kubenswrapper[4680]: I0126 16:34:49.185408 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a" path="/var/lib/kubelet/pods/d7d9e2e6-45fa-4255-bd5f-017fa7aacc1a/volumes" Jan 26 16:34:50 crc kubenswrapper[4680]: I0126 16:34:50.026144 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xtsg9"] Jan 26 16:34:50 crc kubenswrapper[4680]: I0126 16:34:50.033864 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xtsg9"] Jan 26 16:34:51 crc kubenswrapper[4680]: I0126 16:34:51.180859 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97d5f1e-6cd5-4ec0-a10d-203a5c896353" path="/var/lib/kubelet/pods/a97d5f1e-6cd5-4ec0-a10d-203a5c896353/volumes" Jan 26 16:34:54 crc kubenswrapper[4680]: I0126 16:34:54.028775 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jmjhq"] Jan 26 16:34:54 crc kubenswrapper[4680]: I0126 16:34:54.041284 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jmjhq"] Jan 26 16:34:55 crc kubenswrapper[4680]: I0126 16:34:55.179632 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0372bc84-8186-4815-8177-8829bed3556f" path="/var/lib/kubelet/pods/0372bc84-8186-4815-8177-8829bed3556f/volumes" Jan 26 16:35:02 crc kubenswrapper[4680]: I0126 16:35:02.170871 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:35:02 crc kubenswrapper[4680]: E0126 16:35:02.172444 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:35:16 crc kubenswrapper[4680]: I0126 16:35:16.169958 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" 
Jan 26 16:35:16 crc kubenswrapper[4680]: E0126 16:35:16.170785 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:35:30 crc kubenswrapper[4680]: I0126 16:35:30.170327 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:35:30 crc kubenswrapper[4680]: E0126 16:35:30.171157 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:35:32 crc kubenswrapper[4680]: I0126 16:35:32.113139 4680 generic.go:334] "Generic (PLEG): container finished" podID="19ba741e-77c6-4639-b1ae-24261fdab545" containerID="5d8b139706ae5a7b420e4a986e4b7e02c42c3cba8b5f116656fda922a4c92a26" exitCode=0
Jan 26 16:35:32 crc kubenswrapper[4680]: I0126 16:35:32.113229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" event={"ID":"19ba741e-77c6-4639-b1ae-24261fdab545","Type":"ContainerDied","Data":"5d8b139706ae5a7b420e4a986e4b7e02c42c3cba8b5f116656fda922a4c92a26"}
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.579172 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv"
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.701802 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory\") pod \"19ba741e-77c6-4639-b1ae-24261fdab545\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") "
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.701855 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam\") pod \"19ba741e-77c6-4639-b1ae-24261fdab545\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") "
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.701879 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n297\" (UniqueName: \"kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297\") pod \"19ba741e-77c6-4639-b1ae-24261fdab545\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") "
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.701961 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle\") pod \"19ba741e-77c6-4639-b1ae-24261fdab545\" (UID: \"19ba741e-77c6-4639-b1ae-24261fdab545\") "
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.709059 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "19ba741e-77c6-4639-b1ae-24261fdab545" (UID: "19ba741e-77c6-4639-b1ae-24261fdab545"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.716379 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297" (OuterVolumeSpecName: "kube-api-access-5n297") pod "19ba741e-77c6-4639-b1ae-24261fdab545" (UID: "19ba741e-77c6-4639-b1ae-24261fdab545"). InnerVolumeSpecName "kube-api-access-5n297". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.730484 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "19ba741e-77c6-4639-b1ae-24261fdab545" (UID: "19ba741e-77c6-4639-b1ae-24261fdab545"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.730602 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory" (OuterVolumeSpecName: "inventory") pod "19ba741e-77c6-4639-b1ae-24261fdab545" (UID: "19ba741e-77c6-4639-b1ae-24261fdab545"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.804039 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.804164 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.804184 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n297\" (UniqueName: \"kubernetes.io/projected/19ba741e-77c6-4639-b1ae-24261fdab545-kube-api-access-5n297\") on node \"crc\" DevicePath \"\""
Jan 26 16:35:33 crc kubenswrapper[4680]: I0126 16:35:33.804196 4680 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba741e-77c6-4639-b1ae-24261fdab545-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.135157 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv" event={"ID":"19ba741e-77c6-4639-b1ae-24261fdab545","Type":"ContainerDied","Data":"3bd1d8b1c8d4297282e2c066058ba51d61dbf383a92576655c9ef0724faa73b0"}
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.135204 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6qndv"
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.135211 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bd1d8b1c8d4297282e2c066058ba51d61dbf383a92576655c9ef0724faa73b0"
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.255987 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh"]
Jan 26 16:35:34 crc kubenswrapper[4680]: E0126 16:35:34.256858 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ba741e-77c6-4639-b1ae-24261fdab545" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.256924 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ba741e-77c6-4639-b1ae-24261fdab545" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.257270 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ba741e-77c6-4639-b1ae-24261fdab545" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.258047 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh"
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.260001 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.268293 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.268394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.268477 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.272645 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh"] Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.318309 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.318434 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4vs\" (UniqueName: \"kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.318507 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.419794 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4vs\" (UniqueName: \"kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.419891 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.419988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.425545 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.433993 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.436850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4vs\" (UniqueName: \"kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.578638 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.923414 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh"] Jan 26 16:35:34 crc kubenswrapper[4680]: I0126 16:35:34.929799 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:35:35 crc kubenswrapper[4680]: I0126 16:35:35.144787 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" event={"ID":"8af93eb0-99ad-41d5-a0ee-bdb45ae17790","Type":"ContainerStarted","Data":"a9feacda8a2af446bb0a399dec6e7573713a188df51bf3ab50a8abcbfef19e29"} Jan 26 16:35:36 crc kubenswrapper[4680]: I0126 16:35:36.153132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" event={"ID":"8af93eb0-99ad-41d5-a0ee-bdb45ae17790","Type":"ContainerStarted","Data":"0f6f079fda3a4d2d803eb523c022a2b0f0fabcef8f9236cc0805d5c77067a733"} Jan 26 16:35:36 crc kubenswrapper[4680]: I0126 16:35:36.175292 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" podStartSLOduration=1.7317881910000001 podStartE2EDuration="2.175267753s" podCreationTimestamp="2026-01-26 16:35:34 +0000 UTC" firstStartedPulling="2026-01-26 16:35:34.929103722 +0000 UTC m=+1810.090375991" lastFinishedPulling="2026-01-26 16:35:35.372583284 +0000 UTC m=+1810.533855553" observedRunningTime="2026-01-26 16:35:36.169109058 +0000 UTC m=+1811.330381327" watchObservedRunningTime="2026-01-26 16:35:36.175267753 +0000 UTC m=+1811.336540022" Jan 26 16:35:40 crc 
Jan 26 16:35:40 crc kubenswrapper[4680]: I0126 16:35:40.051444 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-84jft"]
Jan 26 16:35:41 crc kubenswrapper[4680]: I0126 16:35:41.182533 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd801f9-47d9-4d25-8809-c923b39525bf" path="/var/lib/kubelet/pods/bbd801f9-47d9-4d25-8809-c923b39525bf/volumes"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.493527 4680 scope.go:117] "RemoveContainer" containerID="7eb72cb3be0afd757653526f3486e410a28b4a51e422004a5683c48ba939c43c"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.531657 4680 scope.go:117] "RemoveContainer" containerID="7645418050bec1ec8ca84fa251ec29f0c38b0bc7c26ff54ce6124adc6adbb64a"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.563217 4680 scope.go:117] "RemoveContainer" containerID="d99b813d199e891b01e73232a4ada37d92016357cd47ca60c98e49cb5f5888a6"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.611466 4680 scope.go:117] "RemoveContainer" containerID="21977e0e0b9e6efef08d5ba2066249fa68d73b684ce4d26f8ad529d8d67e6d94"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.655324 4680 scope.go:117] "RemoveContainer" containerID="1eafed7086d3ff85d779b0bed3ac802a828411109310bf20d483f0b28bb00365"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.716292 4680 scope.go:117] "RemoveContainer" containerID="474d8b5d5ee498f2f734fbf00c18e5a83fb983d84edbebe5c426826bd6e1275b"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.750496 4680 scope.go:117] "RemoveContainer" containerID="36d4563c7a8e7cdd00fab39b4f76494d04eee8a7e08bce62c5a2ebdea44bcd4c"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.767408 4680 scope.go:117] "RemoveContainer" containerID="bf9ee2c3d33d8c048957094ac58e2d9dd30a650e64921165d973d99d59d8d027"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.785424 4680 scope.go:117] "RemoveContainer" containerID="57e4e5ad58029fa452680c4afefa7d4cc51860d877518a5eea4ea9b38262dda2"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.804296 4680 scope.go:117] "RemoveContainer" containerID="b4389a7886647e0d6d6f6793aeeb90477fd5cf284af7a262ef10cc0f995bf506"
Jan 26 16:35:42 crc kubenswrapper[4680]: I0126 16:35:42.830797 4680 scope.go:117] "RemoveContainer" containerID="1263023550be3c449bdc297685577c4ce0eb9a8266ff9c4ff58b2cf537edf70e"
Jan 26 16:35:43 crc kubenswrapper[4680]: I0126 16:35:43.169770 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:35:43 crc kubenswrapper[4680]: E0126 16:35:43.170567 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:35:47 crc kubenswrapper[4680]: I0126 16:35:47.030046 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kjtk7"]
Jan 26 16:35:47 crc kubenswrapper[4680]: I0126 16:35:47.039308 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kjtk7"]
Jan 26 16:35:47 crc kubenswrapper[4680]: I0126 16:35:47.179989 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9fd2fb-6b04-4b4b-813b-b7378b617bbf" path="/var/lib/kubelet/pods/ab9fd2fb-6b04-4b4b-813b-b7378b617bbf/volumes"
Jan 26 16:35:55 crc kubenswrapper[4680]: I0126 16:35:55.027563 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-28dpl"]
Jan 26 16:35:55 crc kubenswrapper[4680]: I0126 16:35:55.036152 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-28dpl"]
Jan 26 16:35:55 crc kubenswrapper[4680]: I0126 16:35:55.181615 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cffb41-1848-473a-9023-204663891964" path="/var/lib/kubelet/pods/83cffb41-1848-473a-9023-204663891964/volumes"
Jan 26 16:35:58 crc kubenswrapper[4680]: I0126 16:35:58.170384 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:35:58 crc kubenswrapper[4680]: E0126 16:35:58.171121 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.067941 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h9tvh"]
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.076644 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zpnh8"]
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.083879 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8b6qn"]
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.092132 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zpnh8"]
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.099820 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h9tvh"]
Jan 26 16:36:10 crc kubenswrapper[4680]: I0126 16:36:10.106838 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8b6qn"]
Jan 26 16:36:11 crc kubenswrapper[4680]: I0126 16:36:11.181890 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59df103d-c023-42a1-8e2c-f262d023d232" path="/var/lib/kubelet/pods/59df103d-c023-42a1-8e2c-f262d023d232/volumes"
Jan 26 16:36:11 crc kubenswrapper[4680]: I0126 16:36:11.183414 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b53f4c-8c15-4f81-b110-3f81b1bd7a5c" path="/var/lib/kubelet/pods/71b53f4c-8c15-4f81-b110-3f81b1bd7a5c/volumes"
Jan 26 16:36:11 crc kubenswrapper[4680]: I0126 16:36:11.184432 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78a7e79-9fe8-46b7-a137-2be924f24935" path="/var/lib/kubelet/pods/a78a7e79-9fe8-46b7-a137-2be924f24935/volumes"
Jan 26 16:36:13 crc kubenswrapper[4680]: I0126 16:36:13.170012 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:36:13 crc kubenswrapper[4680]: E0126 16:36:13.170694 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:36:25 crc kubenswrapper[4680]: I0126 16:36:25.175765 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:36:25 crc kubenswrapper[4680]: E0126 16:36:25.176558 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:36:36 crc kubenswrapper[4680]: I0126 16:36:36.170046 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:36:36 crc kubenswrapper[4680]: E0126 16:36:36.170738 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:36:43 crc kubenswrapper[4680]: I0126 16:36:43.042159 4680 scope.go:117] "RemoveContainer" containerID="014b318a143d7888b9d21332c0dddbf11362a51b6a731fec6e7f9a0eb1040350"
Jan 26 16:36:43 crc kubenswrapper[4680]: I0126 16:36:43.068403 4680 scope.go:117] "RemoveContainer" containerID="51edf639f8cce8a9ff45a4212c79bf986f3a6d9c52e4b273c029d958072ac80f"
Jan 26 16:36:43 crc kubenswrapper[4680]: I0126 16:36:43.121664 4680 scope.go:117] "RemoveContainer" containerID="6e348aa6b9c38d2662a7843f1018b3ef6a29d55fd9c23f5bd317f1bf7472edc8"
Jan 26 16:36:43 crc kubenswrapper[4680]: I0126 16:36:43.177628 4680 scope.go:117] "RemoveContainer" containerID="2ef62766e4859c29f36428511229e5e71147cdde356b816a061e148eea62b8df"
Jan 26 16:36:43 crc kubenswrapper[4680]: I0126 16:36:43.277250 4680 scope.go:117] "RemoveContainer" containerID="3cee9884da32ee85d09929d42240193f8db967149b332005cdb387077dc45c5f"
Jan 26 16:36:49 crc kubenswrapper[4680]: I0126 16:36:49.169432 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:36:49 crc kubenswrapper[4680]: E0126 16:36:49.170415 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:37:02 crc kubenswrapper[4680]: I0126 16:37:02.170337 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7"
Jan 26 16:37:02 crc kubenswrapper[4680]: E0126 16:37:02.171129 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
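A pattern worth noting in the entries above: each short-lived job pod produces a "SyncLoop DELETE" (the API object gained a deletionTimestamp, so graceful deletion has begun) followed by a "SyncLoop REMOVE" (the object is gone from the API and the kubelet can forget it). A hedged client-go sketch that distinguishes the two from a watch; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		switch {
		case ev.Type == watch.Modified && pod.DeletionTimestamp != nil:
			// Graceful deletion started: the kubelet's "SyncLoop DELETE" case.
			fmt.Println("DELETE", pod.Name)
		case ev.Type == watch.Deleted:
			// Object gone from the API: the kubelet's "SyncLoop REMOVE" case.
			fmt.Println("REMOVE", pod.Name)
		}
	}
}
```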
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:37:15 crc kubenswrapper[4680]: I0126 16:37:15.177409 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:37:15 crc kubenswrapper[4680]: E0126 16:37:15.178356 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:37:26 crc kubenswrapper[4680]: I0126 16:37:26.052132 4680 generic.go:334] "Generic (PLEG): container finished" podID="8af93eb0-99ad-41d5-a0ee-bdb45ae17790" containerID="0f6f079fda3a4d2d803eb523c022a2b0f0fabcef8f9236cc0805d5c77067a733" exitCode=0 Jan 26 16:37:26 crc kubenswrapper[4680]: I0126 16:37:26.052678 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" event={"ID":"8af93eb0-99ad-41d5-a0ee-bdb45ae17790","Type":"ContainerDied","Data":"0f6f079fda3a4d2d803eb523c022a2b0f0fabcef8f9236cc0805d5c77067a733"} Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.567306 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.733703 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v4vs\" (UniqueName: \"kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs\") pod \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.733911 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam\") pod \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.733949 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory\") pod \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\" (UID: \"8af93eb0-99ad-41d5-a0ee-bdb45ae17790\") " Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.738953 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs" (OuterVolumeSpecName: "kube-api-access-9v4vs") pod "8af93eb0-99ad-41d5-a0ee-bdb45ae17790" (UID: "8af93eb0-99ad-41d5-a0ee-bdb45ae17790"). InnerVolumeSpecName "kube-api-access-9v4vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.765398 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8af93eb0-99ad-41d5-a0ee-bdb45ae17790" (UID: "8af93eb0-99ad-41d5-a0ee-bdb45ae17790"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.766571 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory" (OuterVolumeSpecName: "inventory") pod "8af93eb0-99ad-41d5-a0ee-bdb45ae17790" (UID: "8af93eb0-99ad-41d5-a0ee-bdb45ae17790"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.836575 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v4vs\" (UniqueName: \"kubernetes.io/projected/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-kube-api-access-9v4vs\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.836780 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:27 crc kubenswrapper[4680]: I0126 16:37:27.836877 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af93eb0-99ad-41d5-a0ee-bdb45ae17790-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.070637 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" event={"ID":"8af93eb0-99ad-41d5-a0ee-bdb45ae17790","Type":"ContainerDied","Data":"a9feacda8a2af446bb0a399dec6e7573713a188df51bf3ab50a8abcbfef19e29"} Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.070968 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9feacda8a2af446bb0a399dec6e7573713a188df51bf3ab50a8abcbfef19e29" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.070908 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gvbzh" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.204470 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr"] Jan 26 16:37:28 crc kubenswrapper[4680]: E0126 16:37:28.205123 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af93eb0-99ad-41d5-a0ee-bdb45ae17790" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.205220 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af93eb0-99ad-41d5-a0ee-bdb45ae17790" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.205538 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af93eb0-99ad-41d5-a0ee-bdb45ae17790" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.206407 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.210999 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.211297 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.211553 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.213024 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.214589 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr"] Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.352263 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.352341 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.352433 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlm7\" (UniqueName: \"kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.454401 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.454457 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.454523 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlm7\" (UniqueName: \"kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.458652 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.459531 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.487632 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmlm7\" (UniqueName: \"kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:28 crc kubenswrapper[4680]: I0126 16:37:28.521745 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:37:29 crc kubenswrapper[4680]: I0126 16:37:29.035528 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr"] Jan 26 16:37:29 crc kubenswrapper[4680]: W0126 16:37:29.040700 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod904fe048_4e1b_406e_910f_3fcd8f6e3842.slice/crio-74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a WatchSource:0}: Error finding container 74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a: Status 404 returned error can't find the container with id 74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a Jan 26 16:37:29 crc kubenswrapper[4680]: I0126 16:37:29.079909 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" event={"ID":"904fe048-4e1b-406e-910f-3fcd8f6e3842","Type":"ContainerStarted","Data":"74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a"} Jan 26 16:37:30 crc kubenswrapper[4680]: I0126 16:37:30.087514 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" event={"ID":"904fe048-4e1b-406e-910f-3fcd8f6e3842","Type":"ContainerStarted","Data":"c87547b07970f34d8054c476317b18e98c4f95e65d8503272562e249a5facfa7"} Jan 26 16:37:30 crc kubenswrapper[4680]: I0126 16:37:30.121808 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" podStartSLOduration=1.61620163 podStartE2EDuration="2.121790246s" podCreationTimestamp="2026-01-26 16:37:28 +0000 UTC" firstStartedPulling="2026-01-26 16:37:29.044761452 +0000 UTC m=+1924.206033711" lastFinishedPulling="2026-01-26 16:37:29.550350058 +0000 UTC m=+1924.711622327" observedRunningTime="2026-01-26 16:37:30.110813314 +0000 UTC m=+1925.272085593" watchObservedRunningTime="2026-01-26 16:37:30.121790246 +0000 UTC m=+1925.283062515" Jan 26 16:37:30 crc kubenswrapper[4680]: I0126 16:37:30.169828 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:37:30 crc kubenswrapper[4680]: E0126 16:37:30.170518 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:37:45 crc kubenswrapper[4680]: I0126 16:37:45.175507 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:37:45 crc kubenswrapper[4680]: E0126 16:37:45.177325 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:37:56 crc 
kubenswrapper[4680]: I0126 16:37:56.049140 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-shljw"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.061112 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8mgm4"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.070607 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-nrvwr"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.079554 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-66ac-account-create-update-8f7pw"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.090100 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1697-account-create-update-wrqjs"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.097912 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ddef-account-create-update-dhsbp"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.107153 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-shljw"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.115211 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1697-account-create-update-wrqjs"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.128296 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-nrvwr"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.136771 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ddef-account-create-update-dhsbp"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.146695 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8mgm4"] Jan 26 16:37:56 crc kubenswrapper[4680]: I0126 16:37:56.154992 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-66ac-account-create-update-8f7pw"] Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.181038 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c8fbd5-da3a-471b-8c0d-64f580b6c89e" path="/var/lib/kubelet/pods/57c8fbd5-da3a-471b-8c0d-64f580b6c89e/volumes" Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.182791 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769e76e0-e28f-49df-a6ea-d786696f02ff" path="/var/lib/kubelet/pods/769e76e0-e28f-49df-a6ea-d786696f02ff/volumes" Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.183676 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9904eca9-5cc3-4395-b834-f1eb89abdc95" path="/var/lib/kubelet/pods/9904eca9-5cc3-4395-b834-f1eb89abdc95/volumes" Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.184507 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d598b28f-3dbb-402d-a506-c4a4e19557b2" path="/var/lib/kubelet/pods/d598b28f-3dbb-402d-a506-c4a4e19557b2/volumes" Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.185823 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4656d6b-2378-4862-a99c-95a0836df0a4" path="/var/lib/kubelet/pods/f4656d6b-2378-4862-a99c-95a0836df0a4/volumes" Jan 26 16:37:57 crc kubenswrapper[4680]: I0126 16:37:57.187201 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f853c1f2-58ca-4001-828d-4fc087046a68" path="/var/lib/kubelet/pods/f853c1f2-58ca-4001-828d-4fc087046a68/volumes" Jan 26 16:38:00 crc 
kubenswrapper[4680]: I0126 16:38:00.170223 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:38:00 crc kubenswrapper[4680]: E0126 16:38:00.170469 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:38:14 crc kubenswrapper[4680]: I0126 16:38:14.169562 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:38:14 crc kubenswrapper[4680]: E0126 16:38:14.170343 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:38:29 crc kubenswrapper[4680]: I0126 16:38:29.170174 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:38:29 crc kubenswrapper[4680]: I0126 16:38:29.578516 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf"} Jan 26 16:38:31 crc kubenswrapper[4680]: I0126 16:38:31.040179 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hj79q"] Jan 26 16:38:31 crc kubenswrapper[4680]: I0126 16:38:31.048209 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hj79q"] Jan 26 16:38:31 crc kubenswrapper[4680]: I0126 16:38:31.180855 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0092c6fa-128e-474b-b8d0-379592af1dc2" path="/var/lib/kubelet/pods/0092c6fa-128e-474b-b8d0-379592af1dc2/volumes" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.414220 4680 scope.go:117] "RemoveContainer" containerID="10f2c67a6b7581065cd96c2a8d60122ed50c4f1cc15660fb234b3e0b91a96bdb" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.452783 4680 scope.go:117] "RemoveContainer" containerID="b463d9643621f1dd2d48656d6e2166ff927ec9ad1dab00d4d69c4ee29afd9f38" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.497155 4680 scope.go:117] "RemoveContainer" containerID="72f877ec0cbeb31efb4dd542d649a5457577af97b31b1e0d96ec497c5dd4e8ef" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.556432 4680 scope.go:117] "RemoveContainer" containerID="fc1b94356735a44d1d9873115f8f4aaee27a6d6a41d1f2f223a4b3748e668b17" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.607446 4680 scope.go:117] "RemoveContainer" containerID="d185c77b64e6badf59bf2600c5dc6d637ba6d226bcff51418270e12c22a12ce5" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 16:38:43.642802 4680 scope.go:117] "RemoveContainer" containerID="05211c639382d5429aa8b68ca5e0718e4d48dfd278dc0b9aca3b0f143482182c" Jan 26 16:38:43 crc kubenswrapper[4680]: I0126 
16:38:43.699877 4680 scope.go:117] "RemoveContainer" containerID="c96babf6006fd7626dbc1566134397a80f806ddd9203379fe6ac4640b00053eb" Jan 26 16:38:57 crc kubenswrapper[4680]: I0126 16:38:57.797244 4680 generic.go:334] "Generic (PLEG): container finished" podID="904fe048-4e1b-406e-910f-3fcd8f6e3842" containerID="c87547b07970f34d8054c476317b18e98c4f95e65d8503272562e249a5facfa7" exitCode=0 Jan 26 16:38:57 crc kubenswrapper[4680]: I0126 16:38:57.797326 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" event={"ID":"904fe048-4e1b-406e-910f-3fcd8f6e3842","Type":"ContainerDied","Data":"c87547b07970f34d8054c476317b18e98c4f95e65d8503272562e249a5facfa7"} Jan 26 16:38:58 crc kubenswrapper[4680]: I0126 16:38:58.044808 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn92r"] Jan 26 16:38:58 crc kubenswrapper[4680]: I0126 16:38:58.060959 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn92r"] Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.035404 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4br2w"] Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.050872 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4br2w"] Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.184173 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa03558-d4dc-4769-946d-c017e5d8d767" path="/var/lib/kubelet/pods/1fa03558-d4dc-4769-946d-c017e5d8d767/volumes" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.185631 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53297ff-fd86-4e5a-8e8b-c1c44c9118b6" path="/var/lib/kubelet/pods/e53297ff-fd86-4e5a-8e8b-c1c44c9118b6/volumes" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.255058 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.394724 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmlm7\" (UniqueName: \"kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7\") pod \"904fe048-4e1b-406e-910f-3fcd8f6e3842\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.394866 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory\") pod \"904fe048-4e1b-406e-910f-3fcd8f6e3842\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.395009 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam\") pod \"904fe048-4e1b-406e-910f-3fcd8f6e3842\" (UID: \"904fe048-4e1b-406e-910f-3fcd8f6e3842\") " Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.406655 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7" (OuterVolumeSpecName: "kube-api-access-dmlm7") pod "904fe048-4e1b-406e-910f-3fcd8f6e3842" (UID: "904fe048-4e1b-406e-910f-3fcd8f6e3842"). InnerVolumeSpecName "kube-api-access-dmlm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.431413 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "904fe048-4e1b-406e-910f-3fcd8f6e3842" (UID: "904fe048-4e1b-406e-910f-3fcd8f6e3842"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.431813 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory" (OuterVolumeSpecName: "inventory") pod "904fe048-4e1b-406e-910f-3fcd8f6e3842" (UID: "904fe048-4e1b-406e-910f-3fcd8f6e3842"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.497748 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.497983 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/904fe048-4e1b-406e-910f-3fcd8f6e3842-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.498121 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmlm7\" (UniqueName: \"kubernetes.io/projected/904fe048-4e1b-406e-910f-3fcd8f6e3842-kube-api-access-dmlm7\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.818544 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" event={"ID":"904fe048-4e1b-406e-910f-3fcd8f6e3842","Type":"ContainerDied","Data":"74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a"} Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.818590 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74acaba4e20258adc33f426af1e46f960c029c5c9f82d1d6a777f5263c3ca99a" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.818591 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qjhr" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.916605 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n"] Jan 26 16:38:59 crc kubenswrapper[4680]: E0126 16:38:59.916979 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904fe048-4e1b-406e-910f-3fcd8f6e3842" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.916998 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="904fe048-4e1b-406e-910f-3fcd8f6e3842" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.917457 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="904fe048-4e1b-406e-910f-3fcd8f6e3842" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.918093 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.920891 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.921259 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.921513 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.922248 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:38:59 crc kubenswrapper[4680]: I0126 16:38:59.928620 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n"] Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.016454 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.016512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrr4\" (UniqueName: \"kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.016685 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.118669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrr4\" (UniqueName: \"kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.118800 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.118871 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.125973 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.130650 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.144621 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrr4\" (UniqueName: \"kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.235452 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.786092 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n"] Jan 26 16:39:00 crc kubenswrapper[4680]: I0126 16:39:00.826447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" event={"ID":"333c83f6-788d-4024-99ff-1a0af02fc676","Type":"ContainerStarted","Data":"57ee1707859c77143d8812bd7e01a59e2f4fabc4db8231b20ff0c6cf847199f7"} Jan 26 16:39:02 crc kubenswrapper[4680]: I0126 16:39:02.842777 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" event={"ID":"333c83f6-788d-4024-99ff-1a0af02fc676","Type":"ContainerStarted","Data":"2221a66cfa7a8a10c18a76a5fe8238e723d0b7149b1315d1f0a8434e9f29e33b"} Jan 26 16:39:02 crc kubenswrapper[4680]: I0126 16:39:02.863358 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" podStartSLOduration=2.736261684 podStartE2EDuration="3.863341668s" podCreationTimestamp="2026-01-26 16:38:59 +0000 UTC" firstStartedPulling="2026-01-26 16:39:00.785254866 +0000 UTC m=+2015.946527135" lastFinishedPulling="2026-01-26 16:39:01.91233485 +0000 UTC m=+2017.073607119" observedRunningTime="2026-01-26 16:39:02.854698803 +0000 UTC m=+2018.015971072" watchObservedRunningTime="2026-01-26 16:39:02.863341668 +0000 UTC m=+2018.024613937" Jan 26 16:39:07 crc kubenswrapper[4680]: I0126 16:39:07.882084 4680 generic.go:334] "Generic (PLEG): container finished" podID="333c83f6-788d-4024-99ff-1a0af02fc676" 
containerID="2221a66cfa7a8a10c18a76a5fe8238e723d0b7149b1315d1f0a8434e9f29e33b" exitCode=0 Jan 26 16:39:07 crc kubenswrapper[4680]: I0126 16:39:07.882179 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" event={"ID":"333c83f6-788d-4024-99ff-1a0af02fc676","Type":"ContainerDied","Data":"2221a66cfa7a8a10c18a76a5fe8238e723d0b7149b1315d1f0a8434e9f29e33b"} Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.301642 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.400224 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory\") pod \"333c83f6-788d-4024-99ff-1a0af02fc676\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.400358 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvrr4\" (UniqueName: \"kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4\") pod \"333c83f6-788d-4024-99ff-1a0af02fc676\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.400606 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam\") pod \"333c83f6-788d-4024-99ff-1a0af02fc676\" (UID: \"333c83f6-788d-4024-99ff-1a0af02fc676\") " Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.408303 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4" (OuterVolumeSpecName: "kube-api-access-pvrr4") pod "333c83f6-788d-4024-99ff-1a0af02fc676" (UID: "333c83f6-788d-4024-99ff-1a0af02fc676"). InnerVolumeSpecName "kube-api-access-pvrr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.431513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory" (OuterVolumeSpecName: "inventory") pod "333c83f6-788d-4024-99ff-1a0af02fc676" (UID: "333c83f6-788d-4024-99ff-1a0af02fc676"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.433911 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "333c83f6-788d-4024-99ff-1a0af02fc676" (UID: "333c83f6-788d-4024-99ff-1a0af02fc676"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.503851 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.503887 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/333c83f6-788d-4024-99ff-1a0af02fc676-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.503896 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvrr4\" (UniqueName: \"kubernetes.io/projected/333c83f6-788d-4024-99ff-1a0af02fc676-kube-api-access-pvrr4\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.908242 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" event={"ID":"333c83f6-788d-4024-99ff-1a0af02fc676","Type":"ContainerDied","Data":"57ee1707859c77143d8812bd7e01a59e2f4fabc4db8231b20ff0c6cf847199f7"} Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.908282 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ee1707859c77143d8812bd7e01a59e2f4fabc4db8231b20ff0c6cf847199f7" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.908291 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cdz9n" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.986787 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g"] Jan 26 16:39:09 crc kubenswrapper[4680]: E0126 16:39:09.987222 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333c83f6-788d-4024-99ff-1a0af02fc676" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.987243 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="333c83f6-788d-4024-99ff-1a0af02fc676" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.987500 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="333c83f6-788d-4024-99ff-1a0af02fc676" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.988237 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.989847 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.990379 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.992657 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:39:09 crc kubenswrapper[4680]: I0126 16:39:09.992944 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.008185 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g"] Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.112834 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.113089 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.113239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6m2\" (UniqueName: \"kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.214638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6m2\" (UniqueName: \"kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.214733 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.214780 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.218594 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.222348 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.233038 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6m2\" (UniqueName: \"kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mms7g\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.305589 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.851498 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g"] Jan 26 16:39:10 crc kubenswrapper[4680]: I0126 16:39:10.920085 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" event={"ID":"3663713c-9be0-4134-a59b-9e038a431a57","Type":"ContainerStarted","Data":"722de99479615c7802e957fdf33b7d67de4a66e807b00797e9bc38a2607ee301"} Jan 26 16:39:11 crc kubenswrapper[4680]: I0126 16:39:11.928371 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" event={"ID":"3663713c-9be0-4134-a59b-9e038a431a57","Type":"ContainerStarted","Data":"ef7750d23d58b14557931d01faff1759b05e2609dabb9a540b73b4f74b461a05"} Jan 26 16:39:11 crc kubenswrapper[4680]: I0126 16:39:11.952544 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" podStartSLOduration=2.488784064 podStartE2EDuration="2.952524226s" podCreationTimestamp="2026-01-26 16:39:09 +0000 UTC" firstStartedPulling="2026-01-26 16:39:10.854537305 +0000 UTC m=+2026.015809574" lastFinishedPulling="2026-01-26 16:39:11.318277467 +0000 UTC m=+2026.479549736" observedRunningTime="2026-01-26 16:39:11.950475288 +0000 UTC m=+2027.111747557" watchObservedRunningTime="2026-01-26 16:39:11.952524226 +0000 UTC m=+2027.113796495" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.281620 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.285541 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.297924 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.438612 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.438739 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hqd\" (UniqueName: \"kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.438841 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.540614 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.540723 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.540829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hqd\" (UniqueName: \"kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.541142 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.541430 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.567336 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d4hqd\" (UniqueName: \"kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd\") pod \"redhat-operators-lx5sc\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:27 crc kubenswrapper[4680]: I0126 16:39:27.604852 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:28 crc kubenswrapper[4680]: I0126 16:39:28.085341 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:39:29 crc kubenswrapper[4680]: I0126 16:39:29.053815 4680 generic.go:334] "Generic (PLEG): container finished" podID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerID="126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1" exitCode=0 Jan 26 16:39:29 crc kubenswrapper[4680]: I0126 16:39:29.053861 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerDied","Data":"126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1"} Jan 26 16:39:29 crc kubenswrapper[4680]: I0126 16:39:29.054122 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerStarted","Data":"328090099df90be15e770e93b885ce45defb0a1463beda4a882ecdfaf618833c"} Jan 26 16:39:30 crc kubenswrapper[4680]: I0126 16:39:30.062864 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerStarted","Data":"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1"} Jan 26 16:39:35 crc kubenswrapper[4680]: I0126 16:39:35.111389 4680 generic.go:334] "Generic (PLEG): container finished" podID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerID="a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1" exitCode=0 Jan 26 16:39:35 crc kubenswrapper[4680]: I0126 16:39:35.111656 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerDied","Data":"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1"} Jan 26 16:39:37 crc kubenswrapper[4680]: I0126 16:39:37.132267 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerStarted","Data":"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793"} Jan 26 16:39:37 crc kubenswrapper[4680]: I0126 16:39:37.161682 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lx5sc" podStartSLOduration=3.074803381 podStartE2EDuration="10.161665951s" podCreationTimestamp="2026-01-26 16:39:27 +0000 UTC" firstStartedPulling="2026-01-26 16:39:29.057261298 +0000 UTC m=+2044.218533567" lastFinishedPulling="2026-01-26 16:39:36.144123868 +0000 UTC m=+2051.305396137" observedRunningTime="2026-01-26 16:39:37.160255441 +0000 UTC m=+2052.321527720" watchObservedRunningTime="2026-01-26 16:39:37.161665951 +0000 UTC m=+2052.322938220" Jan 26 16:39:37 crc kubenswrapper[4680]: I0126 16:39:37.605473 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 
16:39:37 crc kubenswrapper[4680]: I0126 16:39:37.605624 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:38 crc kubenswrapper[4680]: I0126 16:39:38.671336 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lx5sc" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" probeResult="failure" output=< Jan 26 16:39:38 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:39:38 crc kubenswrapper[4680]: > Jan 26 16:39:42 crc kubenswrapper[4680]: I0126 16:39:42.038594 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ckt42"] Jan 26 16:39:42 crc kubenswrapper[4680]: I0126 16:39:42.048890 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ckt42"] Jan 26 16:39:43 crc kubenswrapper[4680]: I0126 16:39:43.180266 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8236f9fb-94da-4bd0-8f13-e2ca69b30db5" path="/var/lib/kubelet/pods/8236f9fb-94da-4bd0-8f13-e2ca69b30db5/volumes" Jan 26 16:39:43 crc kubenswrapper[4680]: I0126 16:39:43.840847 4680 scope.go:117] "RemoveContainer" containerID="c5dfdfd0db4d5b1c650aff2fea418a3933e50e1acd32ecfe36bc8ce9a9e7e648" Jan 26 16:39:43 crc kubenswrapper[4680]: I0126 16:39:43.866244 4680 scope.go:117] "RemoveContainer" containerID="ae551eabb13924844bd11aea1b2a269b000f951eea87b1cc39f9e2300fb7d547" Jan 26 16:39:43 crc kubenswrapper[4680]: I0126 16:39:43.964854 4680 scope.go:117] "RemoveContainer" containerID="cf31bc85f65133f78585954f3ee54886542add33746b0da24a7e5a7549a743c0" Jan 26 16:39:48 crc kubenswrapper[4680]: I0126 16:39:48.657717 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lx5sc" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" probeResult="failure" output=< Jan 26 16:39:48 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 16:39:48 crc kubenswrapper[4680]: > Jan 26 16:39:56 crc kubenswrapper[4680]: I0126 16:39:56.303393 4680 generic.go:334] "Generic (PLEG): container finished" podID="3663713c-9be0-4134-a59b-9e038a431a57" containerID="ef7750d23d58b14557931d01faff1759b05e2609dabb9a540b73b4f74b461a05" exitCode=0 Jan 26 16:39:56 crc kubenswrapper[4680]: I0126 16:39:56.303463 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" event={"ID":"3663713c-9be0-4134-a59b-9e038a431a57","Type":"ContainerDied","Data":"ef7750d23d58b14557931d01faff1759b05e2609dabb9a540b73b4f74b461a05"} Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.657054 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.708976 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.739379 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.911953 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs6m2\" (UniqueName: \"kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2\") pod \"3663713c-9be0-4134-a59b-9e038a431a57\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.912364 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam\") pod \"3663713c-9be0-4134-a59b-9e038a431a57\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.912487 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory\") pod \"3663713c-9be0-4134-a59b-9e038a431a57\" (UID: \"3663713c-9be0-4134-a59b-9e038a431a57\") " Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.918248 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2" (OuterVolumeSpecName: "kube-api-access-xs6m2") pod "3663713c-9be0-4134-a59b-9e038a431a57" (UID: "3663713c-9be0-4134-a59b-9e038a431a57"). InnerVolumeSpecName "kube-api-access-xs6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.937808 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory" (OuterVolumeSpecName: "inventory") pod "3663713c-9be0-4134-a59b-9e038a431a57" (UID: "3663713c-9be0-4134-a59b-9e038a431a57"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:39:57 crc kubenswrapper[4680]: I0126 16:39:57.940208 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3663713c-9be0-4134-a59b-9e038a431a57" (UID: "3663713c-9be0-4134-a59b-9e038a431a57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.014113 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs6m2\" (UniqueName: \"kubernetes.io/projected/3663713c-9be0-4134-a59b-9e038a431a57-kube-api-access-xs6m2\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.014142 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.014163 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3663713c-9be0-4134-a59b-9e038a431a57-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.324272 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.328371 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mms7g" event={"ID":"3663713c-9be0-4134-a59b-9e038a431a57","Type":"ContainerDied","Data":"722de99479615c7802e957fdf33b7d67de4a66e807b00797e9bc38a2607ee301"} Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.328421 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722de99479615c7802e957fdf33b7d67de4a66e807b00797e9bc38a2607ee301" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.511335 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.839753 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks"] Jan 26 16:39:58 crc kubenswrapper[4680]: E0126 16:39:58.840166 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3663713c-9be0-4134-a59b-9e038a431a57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.840180 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3663713c-9be0-4134-a59b-9e038a431a57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.840361 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3663713c-9be0-4134-a59b-9e038a431a57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.840976 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.848678 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.858782 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.859114 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.859314 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.862487 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks"] Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.935636 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7p6j\" (UniqueName: \"kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.935732 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:58 crc kubenswrapper[4680]: I0126 16:39:58.936255 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.038475 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7p6j\" (UniqueName: \"kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.038695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.038926 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.043196 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.049607 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.057157 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7p6j\" (UniqueName: \"kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8cks\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.185249 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.331014 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lx5sc" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" containerID="cri-o://bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793" gracePeriod=2 Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.684312 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.741575 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks"] Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.856518 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4hqd\" (UniqueName: \"kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd\") pod \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.856632 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities\") pod \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.856688 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content\") pod \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\" (UID: \"69ae56a1-6e6c-43b7-91fa-913499f15a6e\") " Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.857647 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities" (OuterVolumeSpecName: "utilities") pod "69ae56a1-6e6c-43b7-91fa-913499f15a6e" (UID: "69ae56a1-6e6c-43b7-91fa-913499f15a6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.863182 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd" (OuterVolumeSpecName: "kube-api-access-d4hqd") pod "69ae56a1-6e6c-43b7-91fa-913499f15a6e" (UID: "69ae56a1-6e6c-43b7-91fa-913499f15a6e"). InnerVolumeSpecName "kube-api-access-d4hqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.959163 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4hqd\" (UniqueName: \"kubernetes.io/projected/69ae56a1-6e6c-43b7-91fa-913499f15a6e-kube-api-access-d4hqd\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.959544 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:39:59 crc kubenswrapper[4680]: I0126 16:39:59.969820 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69ae56a1-6e6c-43b7-91fa-913499f15a6e" (UID: "69ae56a1-6e6c-43b7-91fa-913499f15a6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.061327 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69ae56a1-6e6c-43b7-91fa-913499f15a6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.339822 4680 generic.go:334] "Generic (PLEG): container finished" podID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerID="bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793" exitCode=0 Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.339873 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lx5sc" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.339898 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerDied","Data":"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793"} Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.339929 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx5sc" event={"ID":"69ae56a1-6e6c-43b7-91fa-913499f15a6e","Type":"ContainerDied","Data":"328090099df90be15e770e93b885ce45defb0a1463beda4a882ecdfaf618833c"} Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.339945 4680 scope.go:117] "RemoveContainer" containerID="bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.341728 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" event={"ID":"586e12d8-ffad-4c05-b812-a98c83b3de4d","Type":"ContainerStarted","Data":"256e8483b4784f88b26dad7c0bc19812d43f286ee0fc75faec509a41534a4708"} Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.369730 4680 scope.go:117] "RemoveContainer" containerID="a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.372471 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.380412 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lx5sc"] Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.396005 4680 scope.go:117] "RemoveContainer" containerID="126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.445166 4680 scope.go:117] "RemoveContainer" containerID="bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793" Jan 26 16:40:00 crc kubenswrapper[4680]: E0126 16:40:00.446104 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793\": container with ID starting with bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793 not found: ID does not exist" containerID="bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.446303 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793"} err="failed to get container status 
\"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793\": rpc error: code = NotFound desc = could not find container \"bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793\": container with ID starting with bdb026d8f46c353d3d70d2ebcdad8b59abfae5b507266915d4a0266c0a6f6793 not found: ID does not exist" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.446334 4680 scope.go:117] "RemoveContainer" containerID="a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1" Jan 26 16:40:00 crc kubenswrapper[4680]: E0126 16:40:00.446667 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1\": container with ID starting with a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1 not found: ID does not exist" containerID="a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.446695 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1"} err="failed to get container status \"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1\": rpc error: code = NotFound desc = could not find container \"a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1\": container with ID starting with a29db90382391710b0099d48a78aa844adad645ea537002d48bbc3ad5faf41e1 not found: ID does not exist" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.446715 4680 scope.go:117] "RemoveContainer" containerID="126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1" Jan 26 16:40:00 crc kubenswrapper[4680]: E0126 16:40:00.447052 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1\": container with ID starting with 126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1 not found: ID does not exist" containerID="126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1" Jan 26 16:40:00 crc kubenswrapper[4680]: I0126 16:40:00.447109 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1"} err="failed to get container status \"126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1\": rpc error: code = NotFound desc = could not find container \"126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1\": container with ID starting with 126ee4bba2c216f67fc876333fa80461e31f4b6043b20975b18c665bff83d5b1 not found: ID does not exist" Jan 26 16:40:01 crc kubenswrapper[4680]: I0126 16:40:01.179854 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" path="/var/lib/kubelet/pods/69ae56a1-6e6c-43b7-91fa-913499f15a6e/volumes" Jan 26 16:40:01 crc kubenswrapper[4680]: I0126 16:40:01.351514 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" event={"ID":"586e12d8-ffad-4c05-b812-a98c83b3de4d","Type":"ContainerStarted","Data":"17a5c9be845527a75eb6f517bad3a598c1fe263f3836ec15e232c0cc2794a393"} Jan 26 16:40:01 crc kubenswrapper[4680]: I0126 16:40:01.366995 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" podStartSLOduration=2.924472254 podStartE2EDuration="3.366977554s" podCreationTimestamp="2026-01-26 16:39:58 +0000 UTC" firstStartedPulling="2026-01-26 16:39:59.750346368 +0000 UTC m=+2074.911618637" lastFinishedPulling="2026-01-26 16:40:00.192851668 +0000 UTC m=+2075.354123937" observedRunningTime="2026-01-26 16:40:01.364026411 +0000 UTC m=+2076.525298670" watchObservedRunningTime="2026-01-26 16:40:01.366977554 +0000 UTC m=+2076.528249823" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.019558 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:05 crc kubenswrapper[4680]: E0126 16:40:05.025804 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="extract-utilities" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.025828 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="extract-utilities" Jan 26 16:40:05 crc kubenswrapper[4680]: E0126 16:40:05.025877 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="extract-content" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.025885 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="extract-content" Jan 26 16:40:05 crc kubenswrapper[4680]: E0126 16:40:05.025897 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.025904 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.026164 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ae56a1-6e6c-43b7-91fa-913499f15a6e" containerName="registry-server" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.027811 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.032279 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.156185 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.156541 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqkpq\" (UniqueName: \"kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.156572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.258356 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.258403 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqkpq\" (UniqueName: \"kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.258440 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.258904 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.259001 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.294446 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vqkpq\" (UniqueName: \"kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq\") pod \"community-operators-mdmnl\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.350293 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:05 crc kubenswrapper[4680]: I0126 16:40:05.932465 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:06 crc kubenswrapper[4680]: I0126 16:40:06.392799 4680 generic.go:334] "Generic (PLEG): container finished" podID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerID="e2ce9eda7017202b71fb10d6147fa675febf2371b16b2382ad8f71401436e756" exitCode=0 Jan 26 16:40:06 crc kubenswrapper[4680]: I0126 16:40:06.392859 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerDied","Data":"e2ce9eda7017202b71fb10d6147fa675febf2371b16b2382ad8f71401436e756"} Jan 26 16:40:06 crc kubenswrapper[4680]: I0126 16:40:06.393157 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerStarted","Data":"4f2ccc51f6f5f12d66a3b1ef61c5ba78b8775fdeeb7306938dfe84a976031793"} Jan 26 16:40:07 crc kubenswrapper[4680]: I0126 16:40:07.402875 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerStarted","Data":"47ee27a3f99b4cb08ec0250ead566da68e3a64e0f36f0ac35e7aab62e00bc84b"} Jan 26 16:40:08 crc kubenswrapper[4680]: I0126 16:40:08.411406 4680 generic.go:334] "Generic (PLEG): container finished" podID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerID="47ee27a3f99b4cb08ec0250ead566da68e3a64e0f36f0ac35e7aab62e00bc84b" exitCode=0 Jan 26 16:40:08 crc kubenswrapper[4680]: I0126 16:40:08.411515 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerDied","Data":"47ee27a3f99b4cb08ec0250ead566da68e3a64e0f36f0ac35e7aab62e00bc84b"} Jan 26 16:40:09 crc kubenswrapper[4680]: I0126 16:40:09.421784 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerStarted","Data":"5820b8df61c1256d863f6ccda2ec46c1363afd4808bbe0f0e71f2d7533c6f0ea"} Jan 26 16:40:09 crc kubenswrapper[4680]: I0126 16:40:09.450319 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mdmnl" podStartSLOduration=2.896875218 podStartE2EDuration="5.450301s" podCreationTimestamp="2026-01-26 16:40:04 +0000 UTC" firstStartedPulling="2026-01-26 16:40:06.396214771 +0000 UTC m=+2081.557487040" lastFinishedPulling="2026-01-26 16:40:08.949640553 +0000 UTC m=+2084.110912822" observedRunningTime="2026-01-26 16:40:09.438586658 +0000 UTC m=+2084.599858927" watchObservedRunningTime="2026-01-26 16:40:09.450301 +0000 UTC m=+2084.611573269" Jan 26 16:40:15 crc kubenswrapper[4680]: I0126 16:40:15.351383 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:15 crc kubenswrapper[4680]: I0126 16:40:15.351952 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:15 crc kubenswrapper[4680]: I0126 16:40:15.399842 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:15 crc kubenswrapper[4680]: I0126 16:40:15.513388 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:15 crc kubenswrapper[4680]: I0126 16:40:15.640385 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:17 crc kubenswrapper[4680]: I0126 16:40:17.499591 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mdmnl" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="registry-server" containerID="cri-o://5820b8df61c1256d863f6ccda2ec46c1363afd4808bbe0f0e71f2d7533c6f0ea" gracePeriod=2 Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.519587 4680 generic.go:334] "Generic (PLEG): container finished" podID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerID="5820b8df61c1256d863f6ccda2ec46c1363afd4808bbe0f0e71f2d7533c6f0ea" exitCode=0 Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.519638 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerDied","Data":"5820b8df61c1256d863f6ccda2ec46c1363afd4808bbe0f0e71f2d7533c6f0ea"} Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.519926 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdmnl" event={"ID":"86b2fcdb-3e89-4319-b374-2b85b49ba501","Type":"ContainerDied","Data":"4f2ccc51f6f5f12d66a3b1ef61c5ba78b8775fdeeb7306938dfe84a976031793"} Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.519939 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2ccc51f6f5f12d66a3b1ef61c5ba78b8775fdeeb7306938dfe84a976031793" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.575949 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.741204 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities\") pod \"86b2fcdb-3e89-4319-b374-2b85b49ba501\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.741695 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqkpq\" (UniqueName: \"kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq\") pod \"86b2fcdb-3e89-4319-b374-2b85b49ba501\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.741863 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content\") pod \"86b2fcdb-3e89-4319-b374-2b85b49ba501\" (UID: \"86b2fcdb-3e89-4319-b374-2b85b49ba501\") " Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.742166 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities" (OuterVolumeSpecName: "utilities") pod "86b2fcdb-3e89-4319-b374-2b85b49ba501" (UID: "86b2fcdb-3e89-4319-b374-2b85b49ba501"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.742341 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.747311 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq" (OuterVolumeSpecName: "kube-api-access-vqkpq") pod "86b2fcdb-3e89-4319-b374-2b85b49ba501" (UID: "86b2fcdb-3e89-4319-b374-2b85b49ba501"). InnerVolumeSpecName "kube-api-access-vqkpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.793908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86b2fcdb-3e89-4319-b374-2b85b49ba501" (UID: "86b2fcdb-3e89-4319-b374-2b85b49ba501"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.844470 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqkpq\" (UniqueName: \"kubernetes.io/projected/86b2fcdb-3e89-4319-b374-2b85b49ba501-kube-api-access-vqkpq\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:18 crc kubenswrapper[4680]: I0126 16:40:18.844502 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b2fcdb-3e89-4319-b374-2b85b49ba501-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:40:19 crc kubenswrapper[4680]: I0126 16:40:19.526713 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mdmnl" Jan 26 16:40:19 crc kubenswrapper[4680]: I0126 16:40:19.546208 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:19 crc kubenswrapper[4680]: I0126 16:40:19.556899 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mdmnl"] Jan 26 16:40:21 crc kubenswrapper[4680]: I0126 16:40:21.179859 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" path="/var/lib/kubelet/pods/86b2fcdb-3e89-4319-b374-2b85b49ba501/volumes" Jan 26 16:40:46 crc kubenswrapper[4680]: I0126 16:40:46.981041 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:40:46 crc kubenswrapper[4680]: I0126 16:40:46.981521 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:41:00 crc kubenswrapper[4680]: I0126 16:41:00.845162 4680 generic.go:334] "Generic (PLEG): container finished" podID="586e12d8-ffad-4c05-b812-a98c83b3de4d" containerID="17a5c9be845527a75eb6f517bad3a598c1fe263f3836ec15e232c0cc2794a393" exitCode=0 Jan 26 16:41:00 crc kubenswrapper[4680]: I0126 16:41:00.845214 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" event={"ID":"586e12d8-ffad-4c05-b812-a98c83b3de4d","Type":"ContainerDied","Data":"17a5c9be845527a75eb6f517bad3a598c1fe263f3836ec15e232c0cc2794a393"} Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.322955 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.403641 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam\") pod \"586e12d8-ffad-4c05-b812-a98c83b3de4d\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.404149 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7p6j\" (UniqueName: \"kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j\") pod \"586e12d8-ffad-4c05-b812-a98c83b3de4d\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.404238 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory\") pod \"586e12d8-ffad-4c05-b812-a98c83b3de4d\" (UID: \"586e12d8-ffad-4c05-b812-a98c83b3de4d\") " Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.411262 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j" (OuterVolumeSpecName: "kube-api-access-s7p6j") pod "586e12d8-ffad-4c05-b812-a98c83b3de4d" (UID: "586e12d8-ffad-4c05-b812-a98c83b3de4d"). InnerVolumeSpecName "kube-api-access-s7p6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.431965 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory" (OuterVolumeSpecName: "inventory") pod "586e12d8-ffad-4c05-b812-a98c83b3de4d" (UID: "586e12d8-ffad-4c05-b812-a98c83b3de4d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.433260 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "586e12d8-ffad-4c05-b812-a98c83b3de4d" (UID: "586e12d8-ffad-4c05-b812-a98c83b3de4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.506727 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.506756 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7p6j\" (UniqueName: \"kubernetes.io/projected/586e12d8-ffad-4c05-b812-a98c83b3de4d-kube-api-access-s7p6j\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.506765 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/586e12d8-ffad-4c05-b812-a98c83b3de4d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.869203 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" event={"ID":"586e12d8-ffad-4c05-b812-a98c83b3de4d","Type":"ContainerDied","Data":"256e8483b4784f88b26dad7c0bc19812d43f286ee0fc75faec509a41534a4708"} Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.869249 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256e8483b4784f88b26dad7c0bc19812d43f286ee0fc75faec509a41534a4708" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.869264 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8cks" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.961514 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-llf4l"] Jan 26 16:41:02 crc kubenswrapper[4680]: E0126 16:41:02.961883 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="registry-server" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.961899 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="registry-server" Jan 26 16:41:02 crc kubenswrapper[4680]: E0126 16:41:02.961915 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="extract-content" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.961922 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="extract-content" Jan 26 16:41:02 crc kubenswrapper[4680]: E0126 16:41:02.961955 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="extract-utilities" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.961962 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="extract-utilities" Jan 26 16:41:02 crc kubenswrapper[4680]: E0126 16:41:02.961975 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586e12d8-ffad-4c05-b812-a98c83b3de4d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.961982 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="586e12d8-ffad-4c05-b812-a98c83b3de4d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.962163 4680 
memory_manager.go:354] "RemoveStaleState removing state" podUID="586e12d8-ffad-4c05-b812-a98c83b3de4d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.962193 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b2fcdb-3e89-4319-b374-2b85b49ba501" containerName="registry-server" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.962760 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.967204 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.967281 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.967423 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.972358 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:41:02 crc kubenswrapper[4680]: I0126 16:41:02.986848 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-llf4l"] Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.117337 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.117462 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkgbg\" (UniqueName: \"kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.117743 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.219463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.219542 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkgbg\" (UniqueName: \"kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.219630 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.223963 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.224672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.244673 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkgbg\" (UniqueName: \"kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg\") pod \"ssh-known-hosts-edpm-deployment-llf4l\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.281194 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.796676 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-llf4l"] Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.808733 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:41:03 crc kubenswrapper[4680]: I0126 16:41:03.926417 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" event={"ID":"2039e5e4-eaf0-444e-a3e0-210b72b40483","Type":"ContainerStarted","Data":"b9acfbd58437580fa5ccc0ff7d3bc14745fb41b925487a57f76a0f92d28b1a59"} Jan 26 16:41:04 crc kubenswrapper[4680]: I0126 16:41:04.937345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" event={"ID":"2039e5e4-eaf0-444e-a3e0-210b72b40483","Type":"ContainerStarted","Data":"4600c44e354c1490a5fa5b8ec30f9078c5d721f4d0f403530cd80b49ddd9374f"} Jan 26 16:41:04 crc kubenswrapper[4680]: I0126 16:41:04.954983 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" podStartSLOduration=2.5406027350000002 podStartE2EDuration="2.954965057s" podCreationTimestamp="2026-01-26 16:41:02 +0000 UTC" firstStartedPulling="2026-01-26 16:41:03.808501305 +0000 UTC m=+2138.969773574" lastFinishedPulling="2026-01-26 16:41:04.222863627 +0000 UTC m=+2139.384135896" observedRunningTime="2026-01-26 16:41:04.951558041 +0000 UTC m=+2140.112830310" watchObservedRunningTime="2026-01-26 16:41:04.954965057 +0000 UTC m=+2140.116237326" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.100229 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.103154 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.111648 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.154387 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfp4\" (UniqueName: \"kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.154801 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.154830 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.257427 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htfp4\" (UniqueName: \"kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.257493 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.257514 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.257905 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.258120 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.280370 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-htfp4\" (UniqueName: \"kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4\") pod \"certified-operators-kptbq\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.437119 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:10 crc kubenswrapper[4680]: I0126 16:41:10.958010 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:11 crc kubenswrapper[4680]: I0126 16:41:10.999791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerStarted","Data":"c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104"} Jan 26 16:41:12 crc kubenswrapper[4680]: I0126 16:41:12.011580 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ee7a760-55a1-450f-84db-70615e776144" containerID="881d7b0dbc7ddd9cbbe2b3a91faaa25c5f5227cb3828309708808ad25d8d8e4c" exitCode=0 Jan 26 16:41:12 crc kubenswrapper[4680]: I0126 16:41:12.011946 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerDied","Data":"881d7b0dbc7ddd9cbbe2b3a91faaa25c5f5227cb3828309708808ad25d8d8e4c"} Jan 26 16:41:13 crc kubenswrapper[4680]: I0126 16:41:13.021298 4680 generic.go:334] "Generic (PLEG): container finished" podID="2039e5e4-eaf0-444e-a3e0-210b72b40483" containerID="4600c44e354c1490a5fa5b8ec30f9078c5d721f4d0f403530cd80b49ddd9374f" exitCode=0 Jan 26 16:41:13 crc kubenswrapper[4680]: I0126 16:41:13.021718 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" event={"ID":"2039e5e4-eaf0-444e-a3e0-210b72b40483","Type":"ContainerDied","Data":"4600c44e354c1490a5fa5b8ec30f9078c5d721f4d0f403530cd80b49ddd9374f"} Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.030811 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerStarted","Data":"f982d086186b1aa68e7bc2d08ea20356bc5e0c66652aab312150bbb4b5235c99"} Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.477976 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.559806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkgbg\" (UniqueName: \"kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg\") pod \"2039e5e4-eaf0-444e-a3e0-210b72b40483\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.560118 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0\") pod \"2039e5e4-eaf0-444e-a3e0-210b72b40483\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.560440 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam\") pod \"2039e5e4-eaf0-444e-a3e0-210b72b40483\" (UID: \"2039e5e4-eaf0-444e-a3e0-210b72b40483\") " Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.568451 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg" (OuterVolumeSpecName: "kube-api-access-kkgbg") pod "2039e5e4-eaf0-444e-a3e0-210b72b40483" (UID: "2039e5e4-eaf0-444e-a3e0-210b72b40483"). InnerVolumeSpecName "kube-api-access-kkgbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.586609 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2039e5e4-eaf0-444e-a3e0-210b72b40483" (UID: "2039e5e4-eaf0-444e-a3e0-210b72b40483"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.614498 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2039e5e4-eaf0-444e-a3e0-210b72b40483" (UID: "2039e5e4-eaf0-444e-a3e0-210b72b40483"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.662851 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.662890 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkgbg\" (UniqueName: \"kubernetes.io/projected/2039e5e4-eaf0-444e-a3e0-210b72b40483-kube-api-access-kkgbg\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:14 crc kubenswrapper[4680]: I0126 16:41:14.662918 4680 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2039e5e4-eaf0-444e-a3e0-210b72b40483-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.043141 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ee7a760-55a1-450f-84db-70615e776144" containerID="f982d086186b1aa68e7bc2d08ea20356bc5e0c66652aab312150bbb4b5235c99" exitCode=0 Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.043169 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerDied","Data":"f982d086186b1aa68e7bc2d08ea20356bc5e0c66652aab312150bbb4b5235c99"} Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.045183 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" event={"ID":"2039e5e4-eaf0-444e-a3e0-210b72b40483","Type":"ContainerDied","Data":"b9acfbd58437580fa5ccc0ff7d3bc14745fb41b925487a57f76a0f92d28b1a59"} Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.045223 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9acfbd58437580fa5ccc0ff7d3bc14745fb41b925487a57f76a0f92d28b1a59" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.045354 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-llf4l" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.157198 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf"] Jan 26 16:41:15 crc kubenswrapper[4680]: E0126 16:41:15.157873 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2039e5e4-eaf0-444e-a3e0-210b72b40483" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.157905 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2039e5e4-eaf0-444e-a3e0-210b72b40483" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.158243 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2039e5e4-eaf0-444e-a3e0-210b72b40483" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.159406 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.163427 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.166210 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.171610 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.185662 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.203225 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf"] Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.275403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.276042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.276165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mlt\" (UniqueName: \"kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.379177 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.379309 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mlt\" (UniqueName: \"kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.379370 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.387033 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.390967 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.398976 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mlt\" (UniqueName: \"kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p4tnf\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:15 crc kubenswrapper[4680]: I0126 16:41:15.499363 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:16 crc kubenswrapper[4680]: I0126 16:41:16.054894 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerStarted","Data":"385f086bc341f6c47a1041d64161f559acc3bd4e04f468b9f9854deec29bd840"} Jan 26 16:41:16 crc kubenswrapper[4680]: I0126 16:41:16.077348 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kptbq" podStartSLOduration=2.655052273 podStartE2EDuration="6.077331917s" podCreationTimestamp="2026-01-26 16:41:10 +0000 UTC" firstStartedPulling="2026-01-26 16:41:12.01291013 +0000 UTC m=+2147.174182399" lastFinishedPulling="2026-01-26 16:41:15.435189774 +0000 UTC m=+2150.596462043" observedRunningTime="2026-01-26 16:41:16.073036926 +0000 UTC m=+2151.234309195" watchObservedRunningTime="2026-01-26 16:41:16.077331917 +0000 UTC m=+2151.238604186" Jan 26 16:41:16 crc kubenswrapper[4680]: I0126 16:41:16.135150 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf"] Jan 26 16:41:16 crc kubenswrapper[4680]: I0126 16:41:16.980451 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:41:16 crc kubenswrapper[4680]: I0126 16:41:16.980501 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 
16:41:17 crc kubenswrapper[4680]: I0126 16:41:17.070021 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" event={"ID":"775a6a3c-c24b-4573-ab5b-8ec072995d39","Type":"ContainerStarted","Data":"124b5fc92cd5f7a4813e30ec223d770d63eff15e7b93a0cc0df0ead74150d916"} Jan 26 16:41:19 crc kubenswrapper[4680]: I0126 16:41:19.108406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" event={"ID":"775a6a3c-c24b-4573-ab5b-8ec072995d39","Type":"ContainerStarted","Data":"f0068fbb9b208afd4d721d10810120d44e1d13c91da3281134dca301e3aafb28"} Jan 26 16:41:19 crc kubenswrapper[4680]: I0126 16:41:19.123520 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" podStartSLOduration=2.216333069 podStartE2EDuration="4.123495482s" podCreationTimestamp="2026-01-26 16:41:15 +0000 UTC" firstStartedPulling="2026-01-26 16:41:16.119995236 +0000 UTC m=+2151.281267505" lastFinishedPulling="2026-01-26 16:41:18.027157639 +0000 UTC m=+2153.188429918" observedRunningTime="2026-01-26 16:41:19.122980397 +0000 UTC m=+2154.284252666" watchObservedRunningTime="2026-01-26 16:41:19.123495482 +0000 UTC m=+2154.284767751" Jan 26 16:41:20 crc kubenswrapper[4680]: I0126 16:41:20.437980 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:20 crc kubenswrapper[4680]: I0126 16:41:20.438302 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:20 crc kubenswrapper[4680]: I0126 16:41:20.485733 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:21 crc kubenswrapper[4680]: I0126 16:41:21.191343 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:21 crc kubenswrapper[4680]: I0126 16:41:21.246677 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:23 crc kubenswrapper[4680]: I0126 16:41:23.159023 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kptbq" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="registry-server" containerID="cri-o://385f086bc341f6c47a1041d64161f559acc3bd4e04f468b9f9854deec29bd840" gracePeriod=2 Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.169777 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ee7a760-55a1-450f-84db-70615e776144" containerID="385f086bc341f6c47a1041d64161f559acc3bd4e04f468b9f9854deec29bd840" exitCode=0 Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.169835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerDied","Data":"385f086bc341f6c47a1041d64161f559acc3bd4e04f468b9f9854deec29bd840"} Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.170338 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kptbq" event={"ID":"7ee7a760-55a1-450f-84db-70615e776144","Type":"ContainerDied","Data":"c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104"} Jan 26 16:41:24 crc kubenswrapper[4680]: 
I0126 16:41:24.170359 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.194514 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.254893 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfp4\" (UniqueName: \"kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4\") pod \"7ee7a760-55a1-450f-84db-70615e776144\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.255157 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content\") pod \"7ee7a760-55a1-450f-84db-70615e776144\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.255204 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities\") pod \"7ee7a760-55a1-450f-84db-70615e776144\" (UID: \"7ee7a760-55a1-450f-84db-70615e776144\") " Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.256038 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities" (OuterVolumeSpecName: "utilities") pod "7ee7a760-55a1-450f-84db-70615e776144" (UID: "7ee7a760-55a1-450f-84db-70615e776144"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.269358 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4" (OuterVolumeSpecName: "kube-api-access-htfp4") pod "7ee7a760-55a1-450f-84db-70615e776144" (UID: "7ee7a760-55a1-450f-84db-70615e776144"). InnerVolumeSpecName "kube-api-access-htfp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.303131 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ee7a760-55a1-450f-84db-70615e776144" (UID: "7ee7a760-55a1-450f-84db-70615e776144"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.357410 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.357449 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee7a760-55a1-450f-84db-70615e776144-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:24 crc kubenswrapper[4680]: I0126 16:41:24.357464 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfp4\" (UniqueName: \"kubernetes.io/projected/7ee7a760-55a1-450f-84db-70615e776144-kube-api-access-htfp4\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:25 crc kubenswrapper[4680]: I0126 16:41:25.177524 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kptbq" Jan 26 16:41:25 crc kubenswrapper[4680]: I0126 16:41:25.228914 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:25 crc kubenswrapper[4680]: I0126 16:41:25.243475 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kptbq"] Jan 26 16:41:27 crc kubenswrapper[4680]: I0126 16:41:27.179284 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee7a760-55a1-450f-84db-70615e776144" path="/var/lib/kubelet/pods/7ee7a760-55a1-450f-84db-70615e776144/volumes" Jan 26 16:41:27 crc kubenswrapper[4680]: I0126 16:41:27.196060 4680 generic.go:334] "Generic (PLEG): container finished" podID="775a6a3c-c24b-4573-ab5b-8ec072995d39" containerID="f0068fbb9b208afd4d721d10810120d44e1d13c91da3281134dca301e3aafb28" exitCode=0 Jan 26 16:41:27 crc kubenswrapper[4680]: I0126 16:41:27.196195 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" event={"ID":"775a6a3c-c24b-4573-ab5b-8ec072995d39","Type":"ContainerDied","Data":"f0068fbb9b208afd4d721d10810120d44e1d13c91da3281134dca301e3aafb28"} Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.631998 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.739400 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam\") pod \"775a6a3c-c24b-4573-ab5b-8ec072995d39\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.739815 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory\") pod \"775a6a3c-c24b-4573-ab5b-8ec072995d39\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.740104 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mlt\" (UniqueName: \"kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt\") pod \"775a6a3c-c24b-4573-ab5b-8ec072995d39\" (UID: \"775a6a3c-c24b-4573-ab5b-8ec072995d39\") " Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.749526 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt" (OuterVolumeSpecName: "kube-api-access-86mlt") pod "775a6a3c-c24b-4573-ab5b-8ec072995d39" (UID: "775a6a3c-c24b-4573-ab5b-8ec072995d39"). InnerVolumeSpecName "kube-api-access-86mlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.771720 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory" (OuterVolumeSpecName: "inventory") pod "775a6a3c-c24b-4573-ab5b-8ec072995d39" (UID: "775a6a3c-c24b-4573-ab5b-8ec072995d39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.782203 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "775a6a3c-c24b-4573-ab5b-8ec072995d39" (UID: "775a6a3c-c24b-4573-ab5b-8ec072995d39"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.846360 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86mlt\" (UniqueName: \"kubernetes.io/projected/775a6a3c-c24b-4573-ab5b-8ec072995d39-kube-api-access-86mlt\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.846677 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:28 crc kubenswrapper[4680]: I0126 16:41:28.846689 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/775a6a3c-c24b-4573-ab5b-8ec072995d39-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.213185 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" event={"ID":"775a6a3c-c24b-4573-ab5b-8ec072995d39","Type":"ContainerDied","Data":"124b5fc92cd5f7a4813e30ec223d770d63eff15e7b93a0cc0df0ead74150d916"} Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.213395 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124b5fc92cd5f7a4813e30ec223d770d63eff15e7b93a0cc0df0ead74150d916" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.213244 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p4tnf" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.298333 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw"] Jan 26 16:41:29 crc kubenswrapper[4680]: E0126 16:41:29.299022 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775a6a3c-c24b-4573-ab5b-8ec072995d39" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.299147 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="775a6a3c-c24b-4573-ab5b-8ec072995d39" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:29 crc kubenswrapper[4680]: E0126 16:41:29.299262 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="extract-content" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.299376 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="extract-content" Jan 26 16:41:29 crc kubenswrapper[4680]: E0126 16:41:29.299484 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="extract-utilities" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.299574 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="extract-utilities" Jan 26 16:41:29 crc kubenswrapper[4680]: E0126 16:41:29.299665 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="registry-server" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.299734 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="registry-server" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.300031 4680 
memory_manager.go:354] "RemoveStaleState removing state" podUID="775a6a3c-c24b-4573-ab5b-8ec072995d39" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.300255 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee7a760-55a1-450f-84db-70615e776144" containerName="registry-server" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.301133 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.303841 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.304089 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.304214 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.305437 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.308998 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw"] Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.456512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.456583 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.456640 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmq5\" (UniqueName: \"kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.558332 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmq5\" (UniqueName: \"kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.558829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.559461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.569938 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.570329 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.577203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmq5\" (UniqueName: \"kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: I0126 16:41:29.615669 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:29 crc kubenswrapper[4680]: E0126 16:41:29.626829 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:41:30 crc kubenswrapper[4680]: I0126 16:41:30.158913 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw"] Jan 26 16:41:30 crc kubenswrapper[4680]: W0126 16:41:30.163338 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24cabf70_e0c0_4346_8381_08ed35fe9ad4.slice/crio-d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e WatchSource:0}: Error finding container d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e: Status 404 returned error can't find the container with id d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e Jan 26 16:41:30 crc kubenswrapper[4680]: I0126 16:41:30.223105 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" event={"ID":"24cabf70-e0c0-4346-8381-08ed35fe9ad4","Type":"ContainerStarted","Data":"d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e"} Jan 26 16:41:31 crc kubenswrapper[4680]: I0126 16:41:31.239170 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" event={"ID":"24cabf70-e0c0-4346-8381-08ed35fe9ad4","Type":"ContainerStarted","Data":"769cc0156fc333dcf43e20d2376432e6bb341df6576bc9f780f2eede0fc22bf9"} Jan 26 16:41:31 crc kubenswrapper[4680]: I0126 16:41:31.266337 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" podStartSLOduration=1.8507383179999999 podStartE2EDuration="2.266318116s" podCreationTimestamp="2026-01-26 16:41:29 +0000 UTC" firstStartedPulling="2026-01-26 16:41:30.165581757 +0000 UTC m=+2165.326854026" lastFinishedPulling="2026-01-26 16:41:30.581161555 +0000 UTC m=+2165.742433824" observedRunningTime="2026-01-26 16:41:31.264519205 +0000 UTC m=+2166.425791474" watchObservedRunningTime="2026-01-26 16:41:31.266318116 +0000 UTC m=+2166.427590385" Jan 26 16:41:39 crc kubenswrapper[4680]: E0126 16:41:39.872382 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache]" Jan 26 16:41:41 crc kubenswrapper[4680]: I0126 16:41:41.324412 4680 generic.go:334] "Generic (PLEG): container finished" podID="24cabf70-e0c0-4346-8381-08ed35fe9ad4" containerID="769cc0156fc333dcf43e20d2376432e6bb341df6576bc9f780f2eede0fc22bf9" exitCode=0 Jan 
26 16:41:41 crc kubenswrapper[4680]: I0126 16:41:41.324458 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" event={"ID":"24cabf70-e0c0-4346-8381-08ed35fe9ad4","Type":"ContainerDied","Data":"769cc0156fc333dcf43e20d2376432e6bb341df6576bc9f780f2eede0fc22bf9"} Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.754189 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.904731 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory\") pod \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.905352 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam\") pod \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.905476 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmmq5\" (UniqueName: \"kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5\") pod \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\" (UID: \"24cabf70-e0c0-4346-8381-08ed35fe9ad4\") " Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.909628 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5" (OuterVolumeSpecName: "kube-api-access-cmmq5") pod "24cabf70-e0c0-4346-8381-08ed35fe9ad4" (UID: "24cabf70-e0c0-4346-8381-08ed35fe9ad4"). InnerVolumeSpecName "kube-api-access-cmmq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.936326 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory" (OuterVolumeSpecName: "inventory") pod "24cabf70-e0c0-4346-8381-08ed35fe9ad4" (UID: "24cabf70-e0c0-4346-8381-08ed35fe9ad4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:42 crc kubenswrapper[4680]: I0126 16:41:42.954904 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24cabf70-e0c0-4346-8381-08ed35fe9ad4" (UID: "24cabf70-e0c0-4346-8381-08ed35fe9ad4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.009297 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.009335 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24cabf70-e0c0-4346-8381-08ed35fe9ad4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.009352 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmmq5\" (UniqueName: \"kubernetes.io/projected/24cabf70-e0c0-4346-8381-08ed35fe9ad4-kube-api-access-cmmq5\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.341224 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" event={"ID":"24cabf70-e0c0-4346-8381-08ed35fe9ad4","Type":"ContainerDied","Data":"d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e"} Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.341266 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e9b513cc58cd7bb572fa42d3764d1144180b3b2f88c244f69f7ccbc9d8404e" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.341302 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqqcw" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.432355 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4"] Jan 26 16:41:43 crc kubenswrapper[4680]: E0126 16:41:43.432821 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cabf70-e0c0-4346-8381-08ed35fe9ad4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.432843 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cabf70-e0c0-4346-8381-08ed35fe9ad4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.433049 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="24cabf70-e0c0-4346-8381-08ed35fe9ad4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.433823 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.439120 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440225 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440228 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440329 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440436 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440497 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.440829 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.442197 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.456415 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4"] Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618701 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qln7w\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618844 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618867 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618894 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618919 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.618958 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619032 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619064 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619109 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619140 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.619157 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720640 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720694 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720780 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 
16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720819 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720846 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720933 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720954 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.720991 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qln7w\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.721042 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.721095 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.721136 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.721166 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.726407 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.727383 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.727614 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.728538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.728653 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" 
Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.728923 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.730246 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.731166 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.732390 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.735596 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.739455 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.742733 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qln7w\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.743172 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.744209 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:43 crc kubenswrapper[4680]: I0126 16:41:43.750724 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:41:44 crc kubenswrapper[4680]: W0126 16:41:44.126837 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f34a7a1_f519_476a_b9fd_4247c2af5838.slice/crio-0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc WatchSource:0}: Error finding container 0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc: Status 404 returned error can't find the container with id 0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc Jan 26 16:41:44 crc kubenswrapper[4680]: I0126 16:41:44.130365 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4"] Jan 26 16:41:44 crc kubenswrapper[4680]: I0126 16:41:44.350540 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" event={"ID":"9f34a7a1-f519-476a-b9fd-4247c2af5838","Type":"ContainerStarted","Data":"0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc"} Jan 26 16:41:45 crc kubenswrapper[4680]: I0126 16:41:45.360702 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" event={"ID":"9f34a7a1-f519-476a-b9fd-4247c2af5838","Type":"ContainerStarted","Data":"44c43f700fb5bee89f3135169335db6a0fcc9903d1fc9de5771a2c343631e778"} Jan 26 16:41:46 crc kubenswrapper[4680]: I0126 16:41:46.980688 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:41:46 crc kubenswrapper[4680]: I0126 16:41:46.981022 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:41:46 crc kubenswrapper[4680]: I0126 16:41:46.981115 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:41:46 crc kubenswrapper[4680]: I0126 16:41:46.981773 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf"} 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:41:46 crc kubenswrapper[4680]: I0126 16:41:46.981815 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf" gracePeriod=600 Jan 26 16:41:47 crc kubenswrapper[4680]: I0126 16:41:47.379375 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf" exitCode=0 Jan 26 16:41:47 crc kubenswrapper[4680]: I0126 16:41:47.379584 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf"} Jan 26 16:41:47 crc kubenswrapper[4680]: I0126 16:41:47.379742 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413"} Jan 26 16:41:47 crc kubenswrapper[4680]: I0126 16:41:47.379765 4680 scope.go:117] "RemoveContainer" containerID="19bfdaeb95bd75441a6658bc45952f943396e2344091ba8521951e409975a5f7" Jan 26 16:41:47 crc kubenswrapper[4680]: I0126 16:41:47.410528 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" podStartSLOduration=3.706754093 podStartE2EDuration="4.41048395s" podCreationTimestamp="2026-01-26 16:41:43 +0000 UTC" firstStartedPulling="2026-01-26 16:41:44.128724314 +0000 UTC m=+2179.289996583" lastFinishedPulling="2026-01-26 16:41:44.832454171 +0000 UTC m=+2179.993726440" observedRunningTime="2026-01-26 16:41:45.381434116 +0000 UTC m=+2180.542706385" watchObservedRunningTime="2026-01-26 16:41:47.41048395 +0000 UTC m=+2182.571756219" Jan 26 16:41:50 crc kubenswrapper[4680]: E0126 16:41:50.132167 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache]" Jan 26 16:41:58 crc kubenswrapper[4680]: I0126 16:41:58.998334 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.002128 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.004211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.004403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.004465 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7rq\" (UniqueName: \"kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.008955 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.106712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.106830 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7rq\" (UniqueName: \"kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.107204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.107337 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.107846 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.125952 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8r7rq\" (UniqueName: \"kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq\") pod \"redhat-marketplace-vs8dj\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.326917 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:41:59 crc kubenswrapper[4680]: I0126 16:41:59.838558 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:42:00 crc kubenswrapper[4680]: E0126 16:42:00.378768 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:42:00 crc kubenswrapper[4680]: I0126 16:42:00.499815 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerID="803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263" exitCode=0 Jan 26 16:42:00 crc kubenswrapper[4680]: I0126 16:42:00.499856 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerDied","Data":"803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263"} Jan 26 16:42:00 crc kubenswrapper[4680]: I0126 16:42:00.499884 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerStarted","Data":"770e7746f07c03fd7d577aeb2bea7315085880db76aa8d362e20f67fe7f9070e"} Jan 26 16:42:01 crc kubenswrapper[4680]: I0126 16:42:01.509601 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerStarted","Data":"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9"} Jan 26 16:42:02 crc kubenswrapper[4680]: I0126 16:42:02.521087 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerID="58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9" exitCode=0 Jan 26 16:42:02 crc kubenswrapper[4680]: I0126 16:42:02.521129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerDied","Data":"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9"} Jan 26 16:42:03 crc kubenswrapper[4680]: I0126 16:42:03.530549 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerStarted","Data":"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15"} Jan 26 16:42:03 crc kubenswrapper[4680]: I0126 16:42:03.555992 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vs8dj" 
podStartSLOduration=3.138920072 podStartE2EDuration="5.555977583s" podCreationTimestamp="2026-01-26 16:41:58 +0000 UTC" firstStartedPulling="2026-01-26 16:42:00.502753138 +0000 UTC m=+2195.664025407" lastFinishedPulling="2026-01-26 16:42:02.919810649 +0000 UTC m=+2198.081082918" observedRunningTime="2026-01-26 16:42:03.546380111 +0000 UTC m=+2198.707652390" watchObservedRunningTime="2026-01-26 16:42:03.555977583 +0000 UTC m=+2198.717249852" Jan 26 16:42:09 crc kubenswrapper[4680]: I0126 16:42:09.328272 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:09 crc kubenswrapper[4680]: I0126 16:42:09.328789 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:09 crc kubenswrapper[4680]: I0126 16:42:09.382588 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:09 crc kubenswrapper[4680]: I0126 16:42:09.624102 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:09 crc kubenswrapper[4680]: I0126 16:42:09.670400 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:42:10 crc kubenswrapper[4680]: E0126 16:42:10.618401 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:42:11 crc kubenswrapper[4680]: I0126 16:42:11.594858 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vs8dj" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="registry-server" containerID="cri-o://94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15" gracePeriod=2 Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.389739 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.416805 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7rq\" (UniqueName: \"kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq\") pod \"ef2f83b3-fced-44db-99f0-2ce69c86065e\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.416909 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities\") pod \"ef2f83b3-fced-44db-99f0-2ce69c86065e\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.416943 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content\") pod \"ef2f83b3-fced-44db-99f0-2ce69c86065e\" (UID: \"ef2f83b3-fced-44db-99f0-2ce69c86065e\") " Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.418669 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities" (OuterVolumeSpecName: "utilities") pod "ef2f83b3-fced-44db-99f0-2ce69c86065e" (UID: "ef2f83b3-fced-44db-99f0-2ce69c86065e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.420652 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.436273 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq" (OuterVolumeSpecName: "kube-api-access-8r7rq") pod "ef2f83b3-fced-44db-99f0-2ce69c86065e" (UID: "ef2f83b3-fced-44db-99f0-2ce69c86065e"). InnerVolumeSpecName "kube-api-access-8r7rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.442509 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef2f83b3-fced-44db-99f0-2ce69c86065e" (UID: "ef2f83b3-fced-44db-99f0-2ce69c86065e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.522739 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2f83b3-fced-44db-99f0-2ce69c86065e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.522769 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7rq\" (UniqueName: \"kubernetes.io/projected/ef2f83b3-fced-44db-99f0-2ce69c86065e-kube-api-access-8r7rq\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.620953 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerID="94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15" exitCode=0 Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.621002 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8dj" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.621032 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerDied","Data":"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15"} Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.621093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8dj" event={"ID":"ef2f83b3-fced-44db-99f0-2ce69c86065e","Type":"ContainerDied","Data":"770e7746f07c03fd7d577aeb2bea7315085880db76aa8d362e20f67fe7f9070e"} Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.621117 4680 scope.go:117] "RemoveContainer" containerID="94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.657331 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.665480 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8dj"] Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.668638 4680 scope.go:117] "RemoveContainer" containerID="58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.690828 4680 scope.go:117] "RemoveContainer" containerID="803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.741389 4680 scope.go:117] "RemoveContainer" containerID="94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15" Jan 26 16:42:12 crc kubenswrapper[4680]: E0126 16:42:12.741811 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15\": container with ID starting with 94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15 not found: ID does not exist" containerID="94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.741852 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15"} err="failed to get container status 
\"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15\": rpc error: code = NotFound desc = could not find container \"94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15\": container with ID starting with 94f30e208daa3702823bc2957289bcd9f9dbc9ec9ce0ebe2cbd0a0c1374a5f15 not found: ID does not exist" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.741879 4680 scope.go:117] "RemoveContainer" containerID="58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9" Jan 26 16:42:12 crc kubenswrapper[4680]: E0126 16:42:12.742263 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9\": container with ID starting with 58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9 not found: ID does not exist" containerID="58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.742283 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9"} err="failed to get container status \"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9\": rpc error: code = NotFound desc = could not find container \"58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9\": container with ID starting with 58183cc70b891014f7381b09376d85ccdb7bfdd354b4a4ed1eb57e11bc8ef1a9 not found: ID does not exist" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.742296 4680 scope.go:117] "RemoveContainer" containerID="803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263" Jan 26 16:42:12 crc kubenswrapper[4680]: E0126 16:42:12.742623 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263\": container with ID starting with 803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263 not found: ID does not exist" containerID="803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263" Jan 26 16:42:12 crc kubenswrapper[4680]: I0126 16:42:12.742665 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263"} err="failed to get container status \"803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263\": rpc error: code = NotFound desc = could not find container \"803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263\": container with ID starting with 803713ea3307560e701d9657fcde183565612a068fc0532599ef592daf3dd263 not found: ID does not exist" Jan 26 16:42:13 crc kubenswrapper[4680]: I0126 16:42:13.179953 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" path="/var/lib/kubelet/pods/ef2f83b3-fced-44db-99f0-2ce69c86065e/volumes" Jan 26 16:42:20 crc kubenswrapper[4680]: E0126 16:42:20.853551 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee7a760_55a1_450f_84db_70615e776144.slice/crio-c9b41d39b4771eef3440f9118a83c46f8b02c5b3e3cc3482cce05833a975e104\": RecentStats: unable to find data in memory cache]" Jan 26 16:42:27 crc kubenswrapper[4680]: I0126 16:42:27.751267 4680 generic.go:334] "Generic (PLEG): container finished" podID="9f34a7a1-f519-476a-b9fd-4247c2af5838" containerID="44c43f700fb5bee89f3135169335db6a0fcc9903d1fc9de5771a2c343631e778" exitCode=0 Jan 26 16:42:27 crc kubenswrapper[4680]: I0126 16:42:27.751345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" event={"ID":"9f34a7a1-f519-476a-b9fd-4247c2af5838","Type":"ContainerDied","Data":"44c43f700fb5bee89f3135169335db6a0fcc9903d1fc9de5771a2c343631e778"} Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.226626 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380170 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380576 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380667 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qln7w\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380697 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380729 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380823 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380856 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380930 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.380962 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.381001 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.381031 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.381085 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.381125 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.381156 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9f34a7a1-f519-476a-b9fd-4247c2af5838\" (UID: \"9f34a7a1-f519-476a-b9fd-4247c2af5838\") " Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.387370 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.387789 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w" (OuterVolumeSpecName: "kube-api-access-qln7w") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "kube-api-access-qln7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.388640 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.388861 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.388963 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.389224 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.390342 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.392289 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.392511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.392749 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.402251 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.403892 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.421500 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory" (OuterVolumeSpecName: "inventory") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.422694 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f34a7a1-f519-476a-b9fd-4247c2af5838" (UID: "9f34a7a1-f519-476a-b9fd-4247c2af5838"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483144 4680 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483196 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483210 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483224 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483240 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483253 4680 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483263 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483272 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qln7w\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-kube-api-access-qln7w\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483281 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9f34a7a1-f519-476a-b9fd-4247c2af5838-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483289 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483298 4680 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483306 4680 reconciler_common.go:293] "Volume detached for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483314 4680 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.483325 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f34a7a1-f519-476a-b9fd-4247c2af5838-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.771321 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" event={"ID":"9f34a7a1-f519-476a-b9fd-4247c2af5838","Type":"ContainerDied","Data":"0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc"} Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.771369 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd60ba9f3ff9a6cd076002b91b6a029b295b8808e6f4630d684deeeb3a816bc" Jan 26 16:42:29 crc kubenswrapper[4680]: I0126 16:42:29.771375 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6l2c4" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.099550 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf"] Jan 26 16:42:30 crc kubenswrapper[4680]: E0126 16:42:30.099899 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="extract-content" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.099916 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="extract-content" Jan 26 16:42:30 crc kubenswrapper[4680]: E0126 16:42:30.099926 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f34a7a1-f519-476a-b9fd-4247c2af5838" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.099933 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f34a7a1-f519-476a-b9fd-4247c2af5838" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:42:30 crc kubenswrapper[4680]: E0126 16:42:30.099945 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="extract-utilities" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.099951 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="extract-utilities" Jan 26 16:42:30 crc kubenswrapper[4680]: E0126 16:42:30.099975 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="registry-server" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.099983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="registry-server" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.100175 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f34a7a1-f519-476a-b9fd-4247c2af5838" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 
16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.100207 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2f83b3-fced-44db-99f0-2ce69c86065e" containerName="registry-server" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.100782 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.103819 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.104340 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.104366 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.104485 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.104564 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.121878 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf"] Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.198273 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.198466 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.198511 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.198534 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sch85\" (UniqueName: \"kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.198562 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.300368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.300455 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.300495 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sch85\" (UniqueName: \"kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.300528 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.300601 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.302171 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.305896 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.305964 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.307579 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.325383 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sch85\" (UniqueName: \"kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sctjf\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.432582 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:42:30 crc kubenswrapper[4680]: I0126 16:42:30.964340 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf"] Jan 26 16:42:31 crc kubenswrapper[4680]: I0126 16:42:31.795653 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" event={"ID":"8c2ab31d-7ded-43cc-8801-9ac16d4419b2","Type":"ContainerStarted","Data":"e6ecf4463b755657038f3f7d013c3f27c5363f20c6e141357f882b64dedffa73"} Jan 26 16:42:32 crc kubenswrapper[4680]: I0126 16:42:32.805701 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" event={"ID":"8c2ab31d-7ded-43cc-8801-9ac16d4419b2","Type":"ContainerStarted","Data":"6a451434a0880e758365d51eb1726a7071afa9af463422a7cf3a8e72c41f27a0"} Jan 26 16:42:32 crc kubenswrapper[4680]: I0126 16:42:32.828219 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" podStartSLOduration=3.112936897 podStartE2EDuration="3.82820326s" podCreationTimestamp="2026-01-26 16:42:29 +0000 UTC" firstStartedPulling="2026-01-26 16:42:31.002209485 +0000 UTC m=+2226.163481754" lastFinishedPulling="2026-01-26 16:42:31.717475848 +0000 UTC m=+2226.878748117" observedRunningTime="2026-01-26 16:42:32.82149047 +0000 UTC m=+2227.982762739" watchObservedRunningTime="2026-01-26 16:42:32.82820326 +0000 UTC m=+2227.989475539" Jan 26 16:43:45 crc kubenswrapper[4680]: I0126 16:43:45.023590 4680 generic.go:334] "Generic (PLEG): container finished" podID="8c2ab31d-7ded-43cc-8801-9ac16d4419b2" containerID="6a451434a0880e758365d51eb1726a7071afa9af463422a7cf3a8e72c41f27a0" exitCode=0 Jan 26 16:43:45 crc kubenswrapper[4680]: I0126 16:43:45.023679 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" event={"ID":"8c2ab31d-7ded-43cc-8801-9ac16d4419b2","Type":"ContainerDied","Data":"6a451434a0880e758365d51eb1726a7071afa9af463422a7cf3a8e72c41f27a0"} Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.405835 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.579791 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle\") pod \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.579864 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam\") pod \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.580049 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0\") pod \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.580120 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory\") pod \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.580192 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sch85\" (UniqueName: \"kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85\") pod \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\" (UID: \"8c2ab31d-7ded-43cc-8801-9ac16d4419b2\") " Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.584940 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85" (OuterVolumeSpecName: "kube-api-access-sch85") pod "8c2ab31d-7ded-43cc-8801-9ac16d4419b2" (UID: "8c2ab31d-7ded-43cc-8801-9ac16d4419b2"). InnerVolumeSpecName "kube-api-access-sch85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.585410 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8c2ab31d-7ded-43cc-8801-9ac16d4419b2" (UID: "8c2ab31d-7ded-43cc-8801-9ac16d4419b2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.603294 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8c2ab31d-7ded-43cc-8801-9ac16d4419b2" (UID: "8c2ab31d-7ded-43cc-8801-9ac16d4419b2"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.604834 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory" (OuterVolumeSpecName: "inventory") pod "8c2ab31d-7ded-43cc-8801-9ac16d4419b2" (UID: "8c2ab31d-7ded-43cc-8801-9ac16d4419b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.608760 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c2ab31d-7ded-43cc-8801-9ac16d4419b2" (UID: "8c2ab31d-7ded-43cc-8801-9ac16d4419b2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.682775 4680 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.682808 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.682818 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sch85\" (UniqueName: \"kubernetes.io/projected/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-kube-api-access-sch85\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.682827 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:46 crc kubenswrapper[4680]: I0126 16:43:46.682835 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2ab31d-7ded-43cc-8801-9ac16d4419b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.061654 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" event={"ID":"8c2ab31d-7ded-43cc-8801-9ac16d4419b2","Type":"ContainerDied","Data":"e6ecf4463b755657038f3f7d013c3f27c5363f20c6e141357f882b64dedffa73"} Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.062581 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ecf4463b755657038f3f7d013c3f27c5363f20c6e141357f882b64dedffa73" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.062704 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sctjf" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.135227 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8"] Jan 26 16:43:47 crc kubenswrapper[4680]: E0126 16:43:47.136007 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2ab31d-7ded-43cc-8801-9ac16d4419b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.136122 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2ab31d-7ded-43cc-8801-9ac16d4419b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.136732 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2ab31d-7ded-43cc-8801-9ac16d4419b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.137627 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.149843 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.149963 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.150050 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.150239 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.150358 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.150463 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.155885 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8"] Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.295982 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.296108 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fj8\" (UniqueName: \"kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.296250 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.296336 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.296366 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.296411 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398380 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398434 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398555 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398603 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fj8\" (UniqueName: \"kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.398685 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.403206 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.403906 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.404487 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.406923 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.411917 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc 
kubenswrapper[4680]: I0126 16:43:47.423055 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fj8\" (UniqueName: \"kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:47 crc kubenswrapper[4680]: I0126 16:43:47.457808 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:43:48 crc kubenswrapper[4680]: I0126 16:43:48.008981 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8"] Jan 26 16:43:48 crc kubenswrapper[4680]: W0126 16:43:48.012687 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a2b063d_6b94_47f0_b897_9210ad60ca4d.slice/crio-fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e WatchSource:0}: Error finding container fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e: Status 404 returned error can't find the container with id fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e Jan 26 16:43:48 crc kubenswrapper[4680]: I0126 16:43:48.069429 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" event={"ID":"2a2b063d-6b94-47f0-b897-9210ad60ca4d","Type":"ContainerStarted","Data":"fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e"} Jan 26 16:43:49 crc kubenswrapper[4680]: I0126 16:43:49.104717 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" event={"ID":"2a2b063d-6b94-47f0-b897-9210ad60ca4d","Type":"ContainerStarted","Data":"9abe744910dc70cd76cc9a7c97a3f898ff00f60026a7993a2e58021711ea5a8f"} Jan 26 16:43:49 crc kubenswrapper[4680]: I0126 16:43:49.133967 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" podStartSLOduration=1.6866965550000002 podStartE2EDuration="2.133948634s" podCreationTimestamp="2026-01-26 16:43:47 +0000 UTC" firstStartedPulling="2026-01-26 16:43:48.014924662 +0000 UTC m=+2303.176196931" lastFinishedPulling="2026-01-26 16:43:48.462176751 +0000 UTC m=+2303.623449010" observedRunningTime="2026-01-26 16:43:49.128588832 +0000 UTC m=+2304.289861111" watchObservedRunningTime="2026-01-26 16:43:49.133948634 +0000 UTC m=+2304.295220903" Jan 26 16:44:16 crc kubenswrapper[4680]: I0126 16:44:16.982375 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:44:16 crc kubenswrapper[4680]: I0126 16:44:16.982925 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:44:45 crc kubenswrapper[4680]: I0126 16:44:45.731214 4680 
generic.go:334] "Generic (PLEG): container finished" podID="2a2b063d-6b94-47f0-b897-9210ad60ca4d" containerID="9abe744910dc70cd76cc9a7c97a3f898ff00f60026a7993a2e58021711ea5a8f" exitCode=0 Jan 26 16:44:45 crc kubenswrapper[4680]: I0126 16:44:45.731330 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" event={"ID":"2a2b063d-6b94-47f0-b897-9210ad60ca4d","Type":"ContainerDied","Data":"9abe744910dc70cd76cc9a7c97a3f898ff00f60026a7993a2e58021711ea5a8f"} Jan 26 16:44:46 crc kubenswrapper[4680]: I0126 16:44:46.981382 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:44:46 crc kubenswrapper[4680]: I0126 16:44:46.981769 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.190897 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.310763 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.310857 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.310938 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.311021 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.311131 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54fj8\" (UniqueName: \"kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.311169 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0\") pod \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\" (UID: \"2a2b063d-6b94-47f0-b897-9210ad60ca4d\") " Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.316611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.319297 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8" (OuterVolumeSpecName: "kube-api-access-54fj8") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "kube-api-access-54fj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.345600 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.346688 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.347698 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.355122 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory" (OuterVolumeSpecName: "inventory") pod "2a2b063d-6b94-47f0-b897-9210ad60ca4d" (UID: "2a2b063d-6b94-47f0-b897-9210ad60ca4d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414041 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414100 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414117 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414132 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414145 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54fj8\" (UniqueName: \"kubernetes.io/projected/2a2b063d-6b94-47f0-b897-9210ad60ca4d-kube-api-access-54fj8\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.414159 4680 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2a2b063d-6b94-47f0-b897-9210ad60ca4d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.759736 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" event={"ID":"2a2b063d-6b94-47f0-b897-9210ad60ca4d","Type":"ContainerDied","Data":"fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e"} Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.759776 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1b25c42b71099f32c652643692d17d7c830ddf9aa758ba36896e187732212e" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.759772 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6ptx8" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.884409 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5"] Jan 26 16:44:47 crc kubenswrapper[4680]: E0126 16:44:47.884798 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2b063d-6b94-47f0-b897-9210ad60ca4d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.884816 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2b063d-6b94-47f0-b897-9210ad60ca4d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.884982 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2b063d-6b94-47f0-b897-9210ad60ca4d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.885673 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.887707 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.887816 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.887975 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.890819 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.891958 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:44:47 crc kubenswrapper[4680]: I0126 16:44:47.905417 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5"] Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.030316 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56qk\" (UniqueName: \"kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.030384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.030415 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.030476 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.030525 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.132420 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56qk\" (UniqueName: \"kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.132496 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.132526 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.132589 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.132628 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.137870 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: 
\"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.138386 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.138972 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.139711 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.149587 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56qk\" (UniqueName: \"kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6frz5\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.201043 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:44:48 crc kubenswrapper[4680]: I0126 16:44:48.788213 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5"] Jan 26 16:44:49 crc kubenswrapper[4680]: I0126 16:44:49.788629 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" event={"ID":"0175fffd-c983-48de-876d-5aff516619aa","Type":"ContainerStarted","Data":"55c88161d1ce9ccbf9c7f3fedca777dfb97eecc1ad31e91209595b6bbdca6ff7"} Jan 26 16:44:50 crc kubenswrapper[4680]: I0126 16:44:50.798091 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" event={"ID":"0175fffd-c983-48de-876d-5aff516619aa","Type":"ContainerStarted","Data":"838a3a481b240db89987835fb59877f72449c291f22f2f7a60f95addd8543ec8"} Jan 26 16:44:50 crc kubenswrapper[4680]: I0126 16:44:50.821892 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" podStartSLOduration=2.696023564 podStartE2EDuration="3.821871319s" podCreationTimestamp="2026-01-26 16:44:47 +0000 UTC" firstStartedPulling="2026-01-26 16:44:48.791854251 +0000 UTC m=+2363.953126530" lastFinishedPulling="2026-01-26 16:44:49.917702016 +0000 UTC m=+2365.078974285" observedRunningTime="2026-01-26 16:44:50.815297072 +0000 UTC m=+2365.976569341" watchObservedRunningTime="2026-01-26 16:44:50.821871319 +0000 UTC m=+2365.983143588" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.149666 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9"] Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.153291 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.160063 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.162515 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.195265 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m22\" (UniqueName: \"kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.196441 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.196641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.197122 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9"] Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.298543 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m22\" (UniqueName: \"kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.298838 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.298992 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.299853 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume\") pod 
\"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.304676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.317755 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m22\" (UniqueName: \"kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22\") pod \"collect-profiles-29490765-8ztv9\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.471188 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:00 crc kubenswrapper[4680]: W0126 16:45:00.877249 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacb34efa_8f64_441f_a978_ce7ff6b1f8d8.slice/crio-5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601 WatchSource:0}: Error finding container 5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601: Status 404 returned error can't find the container with id 5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601 Jan 26 16:45:00 crc kubenswrapper[4680]: I0126 16:45:00.888210 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9"] Jan 26 16:45:01 crc kubenswrapper[4680]: I0126 16:45:01.888958 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" event={"ID":"acb34efa-8f64-441f-a978-ce7ff6b1f8d8","Type":"ContainerStarted","Data":"d474e2f6a8c38d0a3f949002b11f1f72fbef4b6a9b7f7fa9fda7dead4f65eb3e"} Jan 26 16:45:01 crc kubenswrapper[4680]: I0126 16:45:01.889337 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" event={"ID":"acb34efa-8f64-441f-a978-ce7ff6b1f8d8","Type":"ContainerStarted","Data":"5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601"} Jan 26 16:45:02 crc kubenswrapper[4680]: I0126 16:45:02.898452 4680 generic.go:334] "Generic (PLEG): container finished" podID="acb34efa-8f64-441f-a978-ce7ff6b1f8d8" containerID="d474e2f6a8c38d0a3f949002b11f1f72fbef4b6a9b7f7fa9fda7dead4f65eb3e" exitCode=0 Jan 26 16:45:02 crc kubenswrapper[4680]: I0126 16:45:02.898498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" event={"ID":"acb34efa-8f64-441f-a978-ce7ff6b1f8d8","Type":"ContainerDied","Data":"d474e2f6a8c38d0a3f949002b11f1f72fbef4b6a9b7f7fa9fda7dead4f65eb3e"} Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.260413 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.279871 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume\") pod \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.279928 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m22\" (UniqueName: \"kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22\") pod \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.279987 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume\") pod \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\" (UID: \"acb34efa-8f64-441f-a978-ce7ff6b1f8d8\") " Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.281586 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "acb34efa-8f64-441f-a978-ce7ff6b1f8d8" (UID: "acb34efa-8f64-441f-a978-ce7ff6b1f8d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.285788 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "acb34efa-8f64-441f-a978-ce7ff6b1f8d8" (UID: "acb34efa-8f64-441f-a978-ce7ff6b1f8d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.290356 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22" (OuterVolumeSpecName: "kube-api-access-29m22") pod "acb34efa-8f64-441f-a978-ce7ff6b1f8d8" (UID: "acb34efa-8f64-441f-a978-ce7ff6b1f8d8"). InnerVolumeSpecName "kube-api-access-29m22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.382317 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.382352 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m22\" (UniqueName: \"kubernetes.io/projected/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-kube-api-access-29m22\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.382365 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/acb34efa-8f64-441f-a978-ce7ff6b1f8d8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.914234 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" event={"ID":"acb34efa-8f64-441f-a978-ce7ff6b1f8d8","Type":"ContainerDied","Data":"5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601"} Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.914561 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3ec808fe0156d30dcc07c8fba0ffbd21155519c849accf883e65454874c601" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.914331 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9" Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.984501 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"] Jan 26 16:45:04 crc kubenswrapper[4680]: I0126 16:45:04.995619 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-mr5ft"] Jan 26 16:45:05 crc kubenswrapper[4680]: I0126 16:45:05.183562 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52101413-4b6f-4b34-bbfc-27d16b75b2a1" path="/var/lib/kubelet/pods/52101413-4b6f-4b34-bbfc-27d16b75b2a1/volumes" Jan 26 16:45:16 crc kubenswrapper[4680]: I0126 16:45:16.981129 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:45:16 crc kubenswrapper[4680]: I0126 16:45:16.981694 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:45:16 crc kubenswrapper[4680]: I0126 16:45:16.981747 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:45:16 crc kubenswrapper[4680]: I0126 16:45:16.982503 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413"} 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:45:16 crc kubenswrapper[4680]: I0126 16:45:16.982563 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" gracePeriod=600 Jan 26 16:45:17 crc kubenswrapper[4680]: E0126 16:45:17.606840 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:45:18 crc kubenswrapper[4680]: I0126 16:45:18.039641 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" exitCode=0 Jan 26 16:45:18 crc kubenswrapper[4680]: I0126 16:45:18.039737 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413"} Jan 26 16:45:18 crc kubenswrapper[4680]: I0126 16:45:18.040106 4680 scope.go:117] "RemoveContainer" containerID="2d5778ad3d975241861671ada2343061d63ca99f24cc8b62af57c10230c757bf" Jan 26 16:45:18 crc kubenswrapper[4680]: I0126 16:45:18.040955 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:45:18 crc kubenswrapper[4680]: E0126 16:45:18.041287 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:45:31 crc kubenswrapper[4680]: I0126 16:45:31.169584 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:45:31 crc kubenswrapper[4680]: E0126 16:45:31.171430 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:45:44 crc kubenswrapper[4680]: I0126 16:45:44.169357 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:45:44 crc kubenswrapper[4680]: E0126 16:45:44.170286 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:45:44 crc kubenswrapper[4680]: I0126 16:45:44.262793 4680 scope.go:117] "RemoveContainer" containerID="94100a6452451499c0321432e02101e0a48076e99dd9b1498a45c4952ba7e9a0" Jan 26 16:45:58 crc kubenswrapper[4680]: I0126 16:45:58.169980 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:45:58 crc kubenswrapper[4680]: E0126 16:45:58.170896 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:46:10 crc kubenswrapper[4680]: I0126 16:46:10.170696 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:46:10 crc kubenswrapper[4680]: E0126 16:46:10.171435 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:46:21 crc kubenswrapper[4680]: I0126 16:46:21.170776 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:46:21 crc kubenswrapper[4680]: E0126 16:46:21.171553 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:46:35 crc kubenswrapper[4680]: I0126 16:46:35.191302 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:46:35 crc kubenswrapper[4680]: E0126 16:46:35.193204 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:46:44 crc kubenswrapper[4680]: I0126 16:46:44.312877 4680 scope.go:117] "RemoveContainer" containerID="e2ce9eda7017202b71fb10d6147fa675febf2371b16b2382ad8f71401436e756" Jan 26 16:46:44 crc kubenswrapper[4680]: I0126 16:46:44.349236 4680 scope.go:117] "RemoveContainer" containerID="5820b8df61c1256d863f6ccda2ec46c1363afd4808bbe0f0e71f2d7533c6f0ea" Jan 26 16:46:44 crc kubenswrapper[4680]: 
I0126 16:46:44.385032 4680 scope.go:117] "RemoveContainer" containerID="47ee27a3f99b4cb08ec0250ead566da68e3a64e0f36f0ac35e7aab62e00bc84b" Jan 26 16:46:47 crc kubenswrapper[4680]: I0126 16:46:47.170382 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:46:47 crc kubenswrapper[4680]: E0126 16:46:47.171239 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:47:02 crc kubenswrapper[4680]: I0126 16:47:02.170271 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:47:02 crc kubenswrapper[4680]: E0126 16:47:02.171529 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:47:14 crc kubenswrapper[4680]: I0126 16:47:14.169814 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:47:14 crc kubenswrapper[4680]: E0126 16:47:14.170690 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:47:27 crc kubenswrapper[4680]: I0126 16:47:27.170089 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:47:27 crc kubenswrapper[4680]: E0126 16:47:27.170946 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:47:39 crc kubenswrapper[4680]: I0126 16:47:39.169807 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:47:39 crc kubenswrapper[4680]: E0126 16:47:39.170477 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:47:44 crc kubenswrapper[4680]: I0126 
16:47:44.444853 4680 scope.go:117] "RemoveContainer" containerID="f982d086186b1aa68e7bc2d08ea20356bc5e0c66652aab312150bbb4b5235c99" Jan 26 16:47:44 crc kubenswrapper[4680]: I0126 16:47:44.485012 4680 scope.go:117] "RemoveContainer" containerID="881d7b0dbc7ddd9cbbe2b3a91faaa25c5f5227cb3828309708808ad25d8d8e4c" Jan 26 16:47:44 crc kubenswrapper[4680]: I0126 16:47:44.532130 4680 scope.go:117] "RemoveContainer" containerID="385f086bc341f6c47a1041d64161f559acc3bd4e04f468b9f9854deec29bd840" Jan 26 16:47:52 crc kubenswrapper[4680]: I0126 16:47:52.169866 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:47:52 crc kubenswrapper[4680]: E0126 16:47:52.170691 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:48:03 crc kubenswrapper[4680]: I0126 16:48:03.169961 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:48:03 crc kubenswrapper[4680]: E0126 16:48:03.170770 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:48:16 crc kubenswrapper[4680]: I0126 16:48:16.171943 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:48:16 crc kubenswrapper[4680]: E0126 16:48:16.172812 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:48:29 crc kubenswrapper[4680]: I0126 16:48:29.170021 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:48:29 crc kubenswrapper[4680]: E0126 16:48:29.170857 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:48:43 crc kubenswrapper[4680]: I0126 16:48:43.170139 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:48:43 crc kubenswrapper[4680]: E0126 16:48:43.171963 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:48:54 crc kubenswrapper[4680]: I0126 16:48:54.170401 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:48:54 crc kubenswrapper[4680]: E0126 16:48:54.171202 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:49:07 crc kubenswrapper[4680]: I0126 16:49:07.169393 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:49:07 crc kubenswrapper[4680]: E0126 16:49:07.170177 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:49:22 crc kubenswrapper[4680]: I0126 16:49:22.170112 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:49:22 crc kubenswrapper[4680]: E0126 16:49:22.171506 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:49:36 crc kubenswrapper[4680]: I0126 16:49:36.169603 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:49:36 crc kubenswrapper[4680]: E0126 16:49:36.171379 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:49:51 crc kubenswrapper[4680]: I0126 16:49:51.173050 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:49:51 crc kubenswrapper[4680]: E0126 16:49:51.175911 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:50:06 crc kubenswrapper[4680]: I0126 16:50:06.169797 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:50:06 crc kubenswrapper[4680]: E0126 16:50:06.170666 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:50:20 crc kubenswrapper[4680]: I0126 16:50:20.169415 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:50:20 crc kubenswrapper[4680]: I0126 16:50:20.570525 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39"} Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.910589 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"] Jan 26 16:50:21 crc kubenswrapper[4680]: E0126 16:50:21.911653 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb34efa-8f64-441f-a978-ce7ff6b1f8d8" containerName="collect-profiles" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.911671 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb34efa-8f64-441f-a978-ce7ff6b1f8d8" containerName="collect-profiles" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.911936 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb34efa-8f64-441f-a978-ce7ff6b1f8d8" containerName="collect-profiles" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.913522 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.930219 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"] Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.971534 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkr2t\" (UniqueName: \"kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.971899 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:21 crc kubenswrapper[4680]: I0126 16:50:21.971968 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.073316 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.073401 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.073487 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkr2t\" (UniqueName: \"kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.074013 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.074014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.097038 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pkr2t\" (UniqueName: \"kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t\") pod \"redhat-operators-t2t7s\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.248302 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:22 crc kubenswrapper[4680]: I0126 16:50:22.751751 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"] Jan 26 16:50:23 crc kubenswrapper[4680]: I0126 16:50:23.595709 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerID="229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b" exitCode=0 Jan 26 16:50:23 crc kubenswrapper[4680]: I0126 16:50:23.595791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerDied","Data":"229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b"} Jan 26 16:50:23 crc kubenswrapper[4680]: I0126 16:50:23.596090 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerStarted","Data":"bc2689f98bb1716195f193c37b189bcc05dc4009917cc97df865d323671b751d"} Jan 26 16:50:23 crc kubenswrapper[4680]: I0126 16:50:23.597808 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:50:26 crc kubenswrapper[4680]: I0126 16:50:26.619059 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerStarted","Data":"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6"} Jan 26 16:50:33 crc kubenswrapper[4680]: I0126 16:50:33.678220 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerID="d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6" exitCode=0 Jan 26 16:50:33 crc kubenswrapper[4680]: I0126 16:50:33.678301 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerDied","Data":"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6"} Jan 26 16:50:34 crc kubenswrapper[4680]: I0126 16:50:34.689624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerStarted","Data":"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d"} Jan 26 16:50:34 crc kubenswrapper[4680]: I0126 16:50:34.715593 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2t7s" podStartSLOduration=2.995787296 podStartE2EDuration="13.715570695s" podCreationTimestamp="2026-01-26 16:50:21 +0000 UTC" firstStartedPulling="2026-01-26 16:50:23.597546139 +0000 UTC m=+2698.758818408" lastFinishedPulling="2026-01-26 16:50:34.317329538 +0000 UTC m=+2709.478601807" observedRunningTime="2026-01-26 16:50:34.71046124 +0000 UTC m=+2709.871733509" watchObservedRunningTime="2026-01-26 16:50:34.715570695 +0000 UTC m=+2709.876842964" Jan 26 16:50:40 crc 
kubenswrapper[4680]: I0126 16:50:40.758015 4680 generic.go:334] "Generic (PLEG): container finished" podID="0175fffd-c983-48de-876d-5aff516619aa" containerID="838a3a481b240db89987835fb59877f72449c291f22f2f7a60f95addd8543ec8" exitCode=0 Jan 26 16:50:40 crc kubenswrapper[4680]: I0126 16:50:40.758515 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" event={"ID":"0175fffd-c983-48de-876d-5aff516619aa","Type":"ContainerDied","Data":"838a3a481b240db89987835fb59877f72449c291f22f2f7a60f95addd8543ec8"} Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.248767 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.249308 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.347269 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.525827 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.526086 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n56qk\" (UniqueName: \"kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.526140 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.526184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.526218 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.556279 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.576340 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk" (OuterVolumeSpecName: "kube-api-access-n56qk") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "kube-api-access-n56qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.605306 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory" (OuterVolumeSpecName: "inventory") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.630031 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.630330 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") pod \"0175fffd-c983-48de-876d-5aff516619aa\" (UID: \"0175fffd-c983-48de-876d-5aff516619aa\") " Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.630966 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n56qk\" (UniqueName: \"kubernetes.io/projected/0175fffd-c983-48de-876d-5aff516619aa-kube-api-access-n56qk\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.630988 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.631004 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:42 crc kubenswrapper[4680]: W0126 16:50:42.636261 4680 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0175fffd-c983-48de-876d-5aff516619aa/volumes/kubernetes.io~secret/libvirt-secret-0 Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.644167 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.708039 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0175fffd-c983-48de-876d-5aff516619aa" (UID: "0175fffd-c983-48de-876d-5aff516619aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.734423 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.734469 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0175fffd-c983-48de-876d-5aff516619aa-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.777323 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" event={"ID":"0175fffd-c983-48de-876d-5aff516619aa","Type":"ContainerDied","Data":"55c88161d1ce9ccbf9c7f3fedca777dfb97eecc1ad31e91209595b6bbdca6ff7"} Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.777366 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c88161d1ce9ccbf9c7f3fedca777dfb97eecc1ad31e91209595b6bbdca6ff7" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.777422 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6frz5" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.927968 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5"] Jan 26 16:50:42 crc kubenswrapper[4680]: E0126 16:50:42.928510 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0175fffd-c983-48de-876d-5aff516619aa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.928533 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0175fffd-c983-48de-876d-5aff516619aa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.928742 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0175fffd-c983-48de-876d-5aff516619aa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 16:50:42 crc kubenswrapper[4680]: E0126 16:50:42.929559 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0175fffd_c983_48de_876d_5aff516619aa.slice/crio-55c88161d1ce9ccbf9c7f3fedca777dfb97eecc1ad31e91209595b6bbdca6ff7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0175fffd_c983_48de_876d_5aff516619aa.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.929614 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.936276 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.936669 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.936840 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.936998 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.937195 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.937263 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.941181 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 16:50:42 crc kubenswrapper[4680]: I0126 16:50:42.954946 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5"] Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.042445 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.042849 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043022 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043193 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043277 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svp9h\" (UniqueName: 
\"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043373 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043448 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043506 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.043578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144762 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144825 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144848 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144876 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144909 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144928 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.144995 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.145033 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.145081 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svp9h\" (UniqueName: \"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.146815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.151979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.152204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.152374 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.152813 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.153009 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.153229 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.155294 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.167803 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svp9h\" (UniqueName: \"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr2g5\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.258927 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5"
Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.327810 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2t7s" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="registry-server" probeResult="failure" output=<
Jan 26 16:50:43 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 16:50:43 crc kubenswrapper[4680]: >
Jan 26 16:50:43 crc kubenswrapper[4680]: I0126 16:50:43.978036 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5"]
Jan 26 16:50:44 crc kubenswrapper[4680]: I0126 16:50:44.793827 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" event={"ID":"145ac4ae-c975-4032-bc83-94b0fe7abb2b","Type":"ContainerStarted","Data":"9ad6c84b04aee795ec80917e113b52ee1ad1afffa095fcede752da1454401592"}
Jan 26 16:50:44 crc kubenswrapper[4680]: I0126 16:50:44.794155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" event={"ID":"145ac4ae-c975-4032-bc83-94b0fe7abb2b","Type":"ContainerStarted","Data":"8a75f5c7b103ca28bbf7a4bd4dd6f4ed868b4f58efa7e0c12957f5b6c6997943"}
Jan 26 16:50:52 crc kubenswrapper[4680]: I0126 16:50:52.292267 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2t7s"
Jan 26 16:50:52 crc kubenswrapper[4680]: I0126 16:50:52.310210 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" podStartSLOduration=9.890640319 podStartE2EDuration="10.310193829s" podCreationTimestamp="2026-01-26 16:50:42 +0000 UTC" firstStartedPulling="2026-01-26 16:50:43.96223801 +0000 UTC m=+2719.123510279" lastFinishedPulling="2026-01-26 16:50:44.38179152 +0000 UTC m=+2719.543063789" observedRunningTime="2026-01-26 16:50:44.821417299 +0000 UTC m=+2719.982689588" watchObservedRunningTime="2026-01-26 16:50:52.310193829 +0000 UTC m=+2727.471466098"
Jan 26 16:50:52 crc kubenswrapper[4680]: I0126 16:50:52.343442 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2t7s"
Jan 26 16:50:53 crc kubenswrapper[4680]: I0126 16:50:53.116586 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"]
Jan 26 16:50:53 crc kubenswrapper[4680]: I0126 16:50:53.874030 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2t7s" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="registry-server" containerID="cri-o://fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d" gracePeriod=2
Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.423851 4680 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.620764 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkr2t\" (UniqueName: \"kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t\") pod \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.620936 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities\") pod \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.621059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content\") pod \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\" (UID: \"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d\") " Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.621554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities" (OuterVolumeSpecName: "utilities") pod "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" (UID: "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.621975 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.638310 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t" (OuterVolumeSpecName: "kube-api-access-pkr2t") pod "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" (UID: "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d"). InnerVolumeSpecName "kube-api-access-pkr2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.724128 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkr2t\" (UniqueName: \"kubernetes.io/projected/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-kube-api-access-pkr2t\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.744160 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" (UID: "c5ae8d9a-a606-477b-90a5-9055a9ecfe8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.826390 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.882105 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerID="fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d" exitCode=0 Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.882164 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2t7s" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.882153 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerDied","Data":"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d"} Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.882305 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2t7s" event={"ID":"c5ae8d9a-a606-477b-90a5-9055a9ecfe8d","Type":"ContainerDied","Data":"bc2689f98bb1716195f193c37b189bcc05dc4009917cc97df865d323671b751d"} Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.882332 4680 scope.go:117] "RemoveContainer" containerID="fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.909323 4680 scope.go:117] "RemoveContainer" containerID="d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.917503 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"] Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.928911 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2t7s"] Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.934532 4680 scope.go:117] "RemoveContainer" containerID="229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.984341 4680 scope.go:117] "RemoveContainer" containerID="fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d" Jan 26 16:50:54 crc kubenswrapper[4680]: E0126 16:50:54.985906 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d\": container with ID starting with fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d not found: ID does not exist" containerID="fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d" Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.985948 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d"} err="failed to get container status \"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d\": rpc error: code = NotFound desc = could not find container \"fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d\": container with ID starting with fe6903c3289dab13f15ad87d9c242195160fd8516bc5f63c362785f547b1723d not found: ID does not exist" Jan 26 16:50:54 crc 
kubenswrapper[4680]: I0126 16:50:54.985978 4680 scope.go:117] "RemoveContainer" containerID="d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6"
Jan 26 16:50:54 crc kubenswrapper[4680]: E0126 16:50:54.986468 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6\": container with ID starting with d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6 not found: ID does not exist" containerID="d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6"
Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.986602 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6"} err="failed to get container status \"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6\": rpc error: code = NotFound desc = could not find container \"d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6\": container with ID starting with d2618c8809e95310e2f5f609b52310772211bf03f694dcd3b677401bb0919cd6 not found: ID does not exist"
Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.986715 4680 scope.go:117] "RemoveContainer" containerID="229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b"
Jan 26 16:50:54 crc kubenswrapper[4680]: E0126 16:50:54.987181 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b\": container with ID starting with 229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b not found: ID does not exist" containerID="229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b"
Jan 26 16:50:54 crc kubenswrapper[4680]: I0126 16:50:54.987207 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b"} err="failed to get container status \"229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b\": rpc error: code = NotFound desc = could not find container \"229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b\": container with ID starting with 229aeab51a3928fd4690575ceaaf4db5bb2dd9f519203786f960b886b2a9922b not found: ID does not exist"
Jan 26 16:50:55 crc kubenswrapper[4680]: I0126 16:50:55.180738 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" path="/var/lib/kubelet/pods/c5ae8d9a-a606-477b-90a5-9055a9ecfe8d/volumes"
Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.876125 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"]
Jan 26 16:51:47 crc kubenswrapper[4680]: E0126 16:51:47.878557 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="extract-utilities"
Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.878659 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="extract-utilities"
Jan 26 16:51:47 crc kubenswrapper[4680]: E0126 16:51:47.878736 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="extract-content"
Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.878795 4680
state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="extract-content" Jan 26 16:51:47 crc kubenswrapper[4680]: E0126 16:51:47.878870 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="registry-server" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.878926 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="registry-server" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.879459 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae8d9a-a606-477b-90a5-9055a9ecfe8d" containerName="registry-server" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.883056 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"] Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.883343 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.961705 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9th7\" (UniqueName: \"kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.961814 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:47 crc kubenswrapper[4680]: I0126 16:51:47.961862 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.063783 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9th7\" (UniqueName: \"kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.063905 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.063955 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc 
kubenswrapper[4680]: I0126 16:51:48.064503 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.064616 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.084409 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9th7\" (UniqueName: \"kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7\") pod \"certified-operators-j7fd9\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.211435 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:48 crc kubenswrapper[4680]: I0126 16:51:48.808479 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"] Jan 26 16:51:49 crc kubenswrapper[4680]: I0126 16:51:49.309296 4680 generic.go:334] "Generic (PLEG): container finished" podID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerID="59f3e17e2fd10cf4ac68cf16ab27cbbed8000224f30408a043bbe3aa4fe67e80" exitCode=0 Jan 26 16:51:49 crc kubenswrapper[4680]: I0126 16:51:49.309350 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerDied","Data":"59f3e17e2fd10cf4ac68cf16ab27cbbed8000224f30408a043bbe3aa4fe67e80"} Jan 26 16:51:49 crc kubenswrapper[4680]: I0126 16:51:49.309780 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerStarted","Data":"a2be0bb5de990ca4a2f3badbc1ac997cf4496c466047d32c87586eb73e230d34"} Jan 26 16:51:51 crc kubenswrapper[4680]: I0126 16:51:51.328976 4680 generic.go:334] "Generic (PLEG): container finished" podID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerID="4aa82d2d9d21b811dbacd1ea660d7b756e52e7fcc7d396d89af328e85c7ccd73" exitCode=0 Jan 26 16:51:51 crc kubenswrapper[4680]: I0126 16:51:51.329042 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerDied","Data":"4aa82d2d9d21b811dbacd1ea660d7b756e52e7fcc7d396d89af328e85c7ccd73"} Jan 26 16:51:54 crc kubenswrapper[4680]: I0126 16:51:54.356107 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerStarted","Data":"c405d025a3bb8844719b37cca24c35f5d89a4fd047bfb3838373ea3130a7a877"} Jan 26 16:51:54 crc kubenswrapper[4680]: I0126 16:51:54.381102 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j7fd9" podStartSLOduration=3.027371182 
podStartE2EDuration="7.3810648s" podCreationTimestamp="2026-01-26 16:51:47 +0000 UTC" firstStartedPulling="2026-01-26 16:51:49.310974027 +0000 UTC m=+2784.472246296" lastFinishedPulling="2026-01-26 16:51:53.664667645 +0000 UTC m=+2788.825939914" observedRunningTime="2026-01-26 16:51:54.374630308 +0000 UTC m=+2789.535902577" watchObservedRunningTime="2026-01-26 16:51:54.3810648 +0000 UTC m=+2789.542337069" Jan 26 16:51:58 crc kubenswrapper[4680]: I0126 16:51:58.213838 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:58 crc kubenswrapper[4680]: I0126 16:51:58.214445 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:58 crc kubenswrapper[4680]: I0126 16:51:58.265453 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:58 crc kubenswrapper[4680]: I0126 16:51:58.430195 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:51:58 crc kubenswrapper[4680]: I0126 16:51:58.498944 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"] Jan 26 16:52:00 crc kubenswrapper[4680]: I0126 16:52:00.402244 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j7fd9" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="registry-server" containerID="cri-o://c405d025a3bb8844719b37cca24c35f5d89a4fd047bfb3838373ea3130a7a877" gracePeriod=2 Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.429327 4680 generic.go:334] "Generic (PLEG): container finished" podID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerID="c405d025a3bb8844719b37cca24c35f5d89a4fd047bfb3838373ea3130a7a877" exitCode=0 Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.429504 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerDied","Data":"c405d025a3bb8844719b37cca24c35f5d89a4fd047bfb3838373ea3130a7a877"} Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.429624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7fd9" event={"ID":"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67","Type":"ContainerDied","Data":"a2be0bb5de990ca4a2f3badbc1ac997cf4496c466047d32c87586eb73e230d34"} Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.429646 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2be0bb5de990ca4a2f3badbc1ac997cf4496c466047d32c87586eb73e230d34" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.463610 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.487521 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities\") pod \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.487763 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content\") pod \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.487816 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9th7\" (UniqueName: \"kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7\") pod \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\" (UID: \"fc337a4c-1a66-4550-9f15-5dd4bd9f8c67\") " Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.489712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities" (OuterVolumeSpecName: "utilities") pod "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" (UID: "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.503337 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7" (OuterVolumeSpecName: "kube-api-access-h9th7") pod "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" (UID: "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67"). InnerVolumeSpecName "kube-api-access-h9th7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.537799 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" (UID: "fc337a4c-1a66-4550-9f15-5dd4bd9f8c67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.588979 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.589020 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:01 crc kubenswrapper[4680]: I0126 16:52:01.589032 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9th7\" (UniqueName: \"kubernetes.io/projected/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67-kube-api-access-h9th7\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:02 crc kubenswrapper[4680]: I0126 16:52:02.436223 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7fd9" Jan 26 16:52:02 crc kubenswrapper[4680]: I0126 16:52:02.468498 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"] Jan 26 16:52:02 crc kubenswrapper[4680]: I0126 16:52:02.478354 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j7fd9"] Jan 26 16:52:03 crc kubenswrapper[4680]: I0126 16:52:03.179891 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" path="/var/lib/kubelet/pods/fc337a4c-1a66-4550-9f15-5dd4bd9f8c67/volumes" Jan 26 16:52:46 crc kubenswrapper[4680]: I0126 16:52:46.980593 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:52:46 crc kubenswrapper[4680]: I0126 16:52:46.981101 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:53:16 crc kubenswrapper[4680]: I0126 16:53:16.980656 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:53:16 crc kubenswrapper[4680]: I0126 16:53:16.981363 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:53:17 crc kubenswrapper[4680]: I0126 16:53:17.371773 4680 generic.go:334] "Generic (PLEG): container finished" podID="145ac4ae-c975-4032-bc83-94b0fe7abb2b" containerID="9ad6c84b04aee795ec80917e113b52ee1ad1afffa095fcede752da1454401592" exitCode=0 Jan 26 16:53:17 crc kubenswrapper[4680]: I0126 16:53:17.371823 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" event={"ID":"145ac4ae-c975-4032-bc83-94b0fe7abb2b","Type":"ContainerDied","Data":"9ad6c84b04aee795ec80917e113b52ee1ad1afffa095fcede752da1454401592"} Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.748275 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.879853 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.879901 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.879992 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880052 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svp9h\" (UniqueName: \"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880101 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880155 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880268 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.880304 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam\") pod \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\" (UID: \"145ac4ae-c975-4032-bc83-94b0fe7abb2b\") " Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.900445 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h" (OuterVolumeSpecName: "kube-api-access-svp9h") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "kube-api-access-svp9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.903477 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.907035 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.915683 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.916062 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.918292 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.919789 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.936263 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.938628 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory" (OuterVolumeSpecName: "inventory") pod "145ac4ae-c975-4032-bc83-94b0fe7abb2b" (UID: "145ac4ae-c975-4032-bc83-94b0fe7abb2b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.982844 4680 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983023 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983140 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svp9h\" (UniqueName: \"kubernetes.io/projected/145ac4ae-c975-4032-bc83-94b0fe7abb2b-kube-api-access-svp9h\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983225 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983282 4680 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983346 4680 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983406 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983460 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:18 crc kubenswrapper[4680]: I0126 16:53:18.983521 4680 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/145ac4ae-c975-4032-bc83-94b0fe7abb2b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.386810 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" event={"ID":"145ac4ae-c975-4032-bc83-94b0fe7abb2b","Type":"ContainerDied","Data":"8a75f5c7b103ca28bbf7a4bd4dd6f4ed868b4f58efa7e0c12957f5b6c6997943"} Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.386847 4680 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8a75f5c7b103ca28bbf7a4bd4dd6f4ed868b4f58efa7e0c12957f5b6c6997943" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.386858 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr2g5" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.482778 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn"] Jan 26 16:53:19 crc kubenswrapper[4680]: E0126 16:53:19.483267 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145ac4ae-c975-4032-bc83-94b0fe7abb2b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483293 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="145ac4ae-c975-4032-bc83-94b0fe7abb2b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:53:19 crc kubenswrapper[4680]: E0126 16:53:19.483349 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="extract-content" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483359 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="extract-content" Jan 26 16:53:19 crc kubenswrapper[4680]: E0126 16:53:19.483366 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="registry-server" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483375 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="registry-server" Jan 26 16:53:19 crc kubenswrapper[4680]: E0126 16:53:19.483392 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="extract-utilities" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483400 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="extract-utilities" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483666 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc337a4c-1a66-4550-9f15-5dd4bd9f8c67" containerName="registry-server" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.483686 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="145ac4ae-c975-4032-bc83-94b0fe7abb2b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.484523 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.487699 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.487930 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.488569 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.488982 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hftwj" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.489273 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.491874 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn"] Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.593290 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rtk\" (UniqueName: \"kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.593355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.593389 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.594105 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.594155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.594274 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.594322 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.600079 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.602008 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.622481 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695765 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695840 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rtk\" (UniqueName: \"kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695866 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.695923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.700925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.701353 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.702148 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.702475 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.707433 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.710753 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.715812 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rtk\" (UniqueName: \"kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.797308 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ss9h\" (UniqueName: \"kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.797396 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.797546 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.805434 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.898829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ss9h\" (UniqueName: \"kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.898928 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.899055 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.899629 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.899705 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.916019 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ss9h\" (UniqueName: \"kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h\") pod \"redhat-marketplace-2vxct\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:19 crc kubenswrapper[4680]: I0126 16:53:19.921450 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:20 crc kubenswrapper[4680]: I0126 16:53:20.511880 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn"] Jan 26 16:53:20 crc kubenswrapper[4680]: I0126 16:53:20.539279 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:20 crc kubenswrapper[4680]: W0126 16:53:20.540913 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd157bf6b_c186_40de_a61c_9444cfa7e952.slice/crio-bfae924b2f2fdeebe305c60dbd02bb2a345089bdf556de34a3b22b28c05941c8 WatchSource:0}: Error finding container bfae924b2f2fdeebe305c60dbd02bb2a345089bdf556de34a3b22b28c05941c8: Status 404 returned error can't find the container with id bfae924b2f2fdeebe305c60dbd02bb2a345089bdf556de34a3b22b28c05941c8 Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.402736 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" event={"ID":"6862087d-7d6e-4bd1-b94f-3f79dc142b7d","Type":"ContainerStarted","Data":"85835300fe9db128367f09950c8db563a4348b8e55ab758c5ce510d1e186ebdb"} Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.404058 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" event={"ID":"6862087d-7d6e-4bd1-b94f-3f79dc142b7d","Type":"ContainerStarted","Data":"327159930de11eecb471fb2f46b7da10e0ecbd5c1ba5d13d426859a0e4c849d4"} Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.405482 4680 generic.go:334] "Generic (PLEG): container finished" podID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerID="33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76" exitCode=0 Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.405573 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerDied","Data":"33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76"} Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.405673 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerStarted","Data":"bfae924b2f2fdeebe305c60dbd02bb2a345089bdf556de34a3b22b28c05941c8"} Jan 26 16:53:21 crc kubenswrapper[4680]: I0126 16:53:21.446552 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" podStartSLOduration=1.8431068000000002 podStartE2EDuration="2.446536367s" podCreationTimestamp="2026-01-26 16:53:19 +0000 UTC" firstStartedPulling="2026-01-26 16:53:20.518102409 +0000 UTC m=+2875.679374678" lastFinishedPulling="2026-01-26 16:53:21.121531976 +0000 UTC m=+2876.282804245" observedRunningTime="2026-01-26 16:53:21.427372745 +0000 UTC m=+2876.588645014" watchObservedRunningTime="2026-01-26 16:53:21.446536367 +0000 UTC m=+2876.607808636" Jan 26 16:53:23 crc kubenswrapper[4680]: I0126 16:53:23.423503 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerStarted","Data":"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c"} Jan 26 16:53:24 crc 
kubenswrapper[4680]: I0126 16:53:24.441635 4680 generic.go:334] "Generic (PLEG): container finished" podID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerID="e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c" exitCode=0 Jan 26 16:53:24 crc kubenswrapper[4680]: I0126 16:53:24.441957 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerDied","Data":"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c"} Jan 26 16:53:25 crc kubenswrapper[4680]: I0126 16:53:25.452983 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerStarted","Data":"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e"} Jan 26 16:53:29 crc kubenswrapper[4680]: I0126 16:53:29.921930 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:29 crc kubenswrapper[4680]: I0126 16:53:29.922607 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:29 crc kubenswrapper[4680]: I0126 16:53:29.973394 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:29 crc kubenswrapper[4680]: I0126 16:53:29.993106 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2vxct" podStartSLOduration=7.504640169 podStartE2EDuration="10.993087508s" podCreationTimestamp="2026-01-26 16:53:19 +0000 UTC" firstStartedPulling="2026-01-26 16:53:21.406648278 +0000 UTC m=+2876.567920537" lastFinishedPulling="2026-01-26 16:53:24.895095607 +0000 UTC m=+2880.056367876" observedRunningTime="2026-01-26 16:53:26.489332938 +0000 UTC m=+2881.650605207" watchObservedRunningTime="2026-01-26 16:53:29.993087508 +0000 UTC m=+2885.154359767" Jan 26 16:53:30 crc kubenswrapper[4680]: I0126 16:53:30.529622 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:30 crc kubenswrapper[4680]: I0126 16:53:30.577407 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.502839 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2vxct" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="registry-server" containerID="cri-o://0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e" gracePeriod=2 Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.949378 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.986724 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content\") pod \"d157bf6b-c186-40de-a61c-9444cfa7e952\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.987029 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ss9h\" (UniqueName: \"kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h\") pod \"d157bf6b-c186-40de-a61c-9444cfa7e952\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.987277 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities\") pod \"d157bf6b-c186-40de-a61c-9444cfa7e952\" (UID: \"d157bf6b-c186-40de-a61c-9444cfa7e952\") " Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.987992 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities" (OuterVolumeSpecName: "utilities") pod "d157bf6b-c186-40de-a61c-9444cfa7e952" (UID: "d157bf6b-c186-40de-a61c-9444cfa7e952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:53:32 crc kubenswrapper[4680]: I0126 16:53:32.988643 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.001447 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h" (OuterVolumeSpecName: "kube-api-access-9ss9h") pod "d157bf6b-c186-40de-a61c-9444cfa7e952" (UID: "d157bf6b-c186-40de-a61c-9444cfa7e952"). InnerVolumeSpecName "kube-api-access-9ss9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.018288 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d157bf6b-c186-40de-a61c-9444cfa7e952" (UID: "d157bf6b-c186-40de-a61c-9444cfa7e952"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.089960 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d157bf6b-c186-40de-a61c-9444cfa7e952-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.090535 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ss9h\" (UniqueName: \"kubernetes.io/projected/d157bf6b-c186-40de-a61c-9444cfa7e952-kube-api-access-9ss9h\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.513724 4680 generic.go:334] "Generic (PLEG): container finished" podID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerID="0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e" exitCode=0 Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.513769 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerDied","Data":"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e"} Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.513804 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vxct" event={"ID":"d157bf6b-c186-40de-a61c-9444cfa7e952","Type":"ContainerDied","Data":"bfae924b2f2fdeebe305c60dbd02bb2a345089bdf556de34a3b22b28c05941c8"} Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.513824 4680 scope.go:117] "RemoveContainer" containerID="0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.513847 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vxct" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.537741 4680 scope.go:117] "RemoveContainer" containerID="e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.547543 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.558158 4680 scope.go:117] "RemoveContainer" containerID="33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.560813 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vxct"] Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.609199 4680 scope.go:117] "RemoveContainer" containerID="0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e" Jan 26 16:53:33 crc kubenswrapper[4680]: E0126 16:53:33.609700 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e\": container with ID starting with 0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e not found: ID does not exist" containerID="0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.609746 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e"} err="failed to get container status \"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e\": rpc error: code = NotFound desc = could not find container \"0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e\": container with ID starting with 0af9ffaf8b8246c7107722be948d42d0f932c57ca6db2f6dfd173349cd40817e not found: ID does not exist" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.609772 4680 scope.go:117] "RemoveContainer" containerID="e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c" Jan 26 16:53:33 crc kubenswrapper[4680]: E0126 16:53:33.610017 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c\": container with ID starting with e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c not found: ID does not exist" containerID="e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.610052 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c"} err="failed to get container status \"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c\": rpc error: code = NotFound desc = could not find container \"e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c\": container with ID starting with e7cb55ba81000ee80b9374cda846244884a63e9543b0080ca7175c42aef8360c not found: ID does not exist" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.610096 4680 scope.go:117] "RemoveContainer" containerID="33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76" Jan 26 16:53:33 crc kubenswrapper[4680]: E0126 16:53:33.610326 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76\": container with ID starting with 33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76 not found: ID does not exist" containerID="33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76" Jan 26 16:53:33 crc kubenswrapper[4680]: I0126 16:53:33.610361 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76"} err="failed to get container status \"33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76\": rpc error: code = NotFound desc = could not find container \"33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76\": container with ID starting with 33499fc71105d631d7f04a9d44552f2ed5d3bdaec667d7d68332fd29fef93b76 not found: ID does not exist" Jan 26 16:53:35 crc kubenswrapper[4680]: I0126 16:53:35.180019 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" path="/var/lib/kubelet/pods/d157bf6b-c186-40de-a61c-9444cfa7e952/volumes" Jan 26 16:53:46 crc kubenswrapper[4680]: I0126 16:53:46.980626 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:53:46 crc kubenswrapper[4680]: I0126 16:53:46.981207 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:53:46 crc kubenswrapper[4680]: I0126 16:53:46.981258 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:53:46 crc kubenswrapper[4680]: I0126 16:53:46.982180 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:53:46 crc kubenswrapper[4680]: I0126 16:53:46.982239 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39" gracePeriod=600 Jan 26 16:53:47 crc kubenswrapper[4680]: I0126 16:53:47.656471 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39" exitCode=0 Jan 26 16:53:47 crc kubenswrapper[4680]: I0126 16:53:47.656603 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39"} Jan 26 16:53:47 crc kubenswrapper[4680]: I0126 16:53:47.657199 4680 scope.go:117] "RemoveContainer" containerID="14191a4903536bfb7728bd37efde7c42c912c54748a86aaa41899a2ba9bca413" Jan 26 16:53:48 crc kubenswrapper[4680]: I0126 16:53:48.668718 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"} Jan 26 16:56:16 crc kubenswrapper[4680]: I0126 16:56:16.980914 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:56:16 crc kubenswrapper[4680]: I0126 16:56:16.981500 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:56:35 crc kubenswrapper[4680]: I0126 16:56:35.112703 4680 generic.go:334] "Generic (PLEG): container finished" podID="6862087d-7d6e-4bd1-b94f-3f79dc142b7d" containerID="85835300fe9db128367f09950c8db563a4348b8e55ab758c5ce510d1e186ebdb" exitCode=0 Jan 26 16:56:35 crc kubenswrapper[4680]: I0126 16:56:35.112776 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" event={"ID":"6862087d-7d6e-4bd1-b94f-3f79dc142b7d","Type":"ContainerDied","Data":"85835300fe9db128367f09950c8db563a4348b8e55ab758c5ce510d1e186ebdb"} Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.553312 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.703827 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.703941 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.703989 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.704022 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.704093 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rtk\" (UniqueName: \"kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.704123 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.704293 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle\") pod \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\" (UID: \"6862087d-7d6e-4bd1-b94f-3f79dc142b7d\") " Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.709715 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.709851 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk" (OuterVolumeSpecName: "kube-api-access-k5rtk") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "kube-api-access-k5rtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.733212 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.734190 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.735797 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory" (OuterVolumeSpecName: "inventory") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.739023 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.742337 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6862087d-7d6e-4bd1-b94f-3f79dc142b7d" (UID: "6862087d-7d6e-4bd1-b94f-3f79dc142b7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806398 4680 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806434 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806442 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806452 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806463 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806472 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rtk\" (UniqueName: \"kubernetes.io/projected/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-kube-api-access-k5rtk\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:36 crc kubenswrapper[4680]: I0126 16:56:36.806481 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6862087d-7d6e-4bd1-b94f-3f79dc142b7d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:37 crc kubenswrapper[4680]: I0126 16:56:37.130428 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" event={"ID":"6862087d-7d6e-4bd1-b94f-3f79dc142b7d","Type":"ContainerDied","Data":"327159930de11eecb471fb2f46b7da10e0ecbd5c1ba5d13d426859a0e4c849d4"} Jan 26 16:56:37 crc kubenswrapper[4680]: I0126 16:56:37.130478 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="327159930de11eecb471fb2f46b7da10e0ecbd5c1ba5d13d426859a0e4c849d4" Jan 26 16:56:37 crc kubenswrapper[4680]: I0126 16:56:37.130496 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tp2rn" Jan 26 16:56:46 crc kubenswrapper[4680]: I0126 16:56:46.980771 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:56:46 crc kubenswrapper[4680]: I0126 16:56:46.981220 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:57:16 crc kubenswrapper[4680]: I0126 16:57:16.980763 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:57:16 crc kubenswrapper[4680]: I0126 16:57:16.982785 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:57:16 crc kubenswrapper[4680]: I0126 16:57:16.982937 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 16:57:16 crc kubenswrapper[4680]: I0126 16:57:16.984491 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:57:16 crc kubenswrapper[4680]: I0126 16:57:16.984705 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" gracePeriod=600 Jan 26 16:57:17 crc kubenswrapper[4680]: E0126 16:57:17.108851 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:57:17 crc kubenswrapper[4680]: I0126 16:57:17.517644 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" exitCode=0 Jan 26 16:57:17 crc kubenswrapper[4680]: I0126 16:57:17.517716 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"} Jan 26 16:57:17 crc kubenswrapper[4680]: I0126 16:57:17.517983 4680 scope.go:117] "RemoveContainer" containerID="ea35982f5bc27f6b837d95837935b9694ae238dc68bb27db0722d6c275437a39" Jan 26 16:57:17 crc kubenswrapper[4680]: I0126 16:57:17.518470 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:57:17 crc kubenswrapper[4680]: E0126 16:57:17.518761 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.142156 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 16:57:23 crc kubenswrapper[4680]: E0126 16:57:23.143143 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="extract-content" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143159 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="extract-content" Jan 26 16:57:23 crc kubenswrapper[4680]: E0126 16:57:23.143180 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="extract-utilities" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143375 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="extract-utilities" Jan 26 16:57:23 crc kubenswrapper[4680]: E0126 16:57:23.143390 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6862087d-7d6e-4bd1-b94f-3f79dc142b7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143399 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6862087d-7d6e-4bd1-b94f-3f79dc142b7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:57:23 crc kubenswrapper[4680]: E0126 16:57:23.143415 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="registry-server" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143424 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="registry-server" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143642 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d157bf6b-c186-40de-a61c-9444cfa7e952" containerName="registry-server" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.143665 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6862087d-7d6e-4bd1-b94f-3f79dc142b7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.144643 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.146743 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.150435 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.150702 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pzhsj" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.151889 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.154994 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226049 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnzn4\" (UniqueName: \"kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226397 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226438 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226529 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226584 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226724 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226797 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.226894 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.227015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329339 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnzn4\" (UniqueName: \"kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329397 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329428 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329491 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329525 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329553 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329580 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329644 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.329682 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.330676 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.330875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.330890 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.331975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.332448 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.337542 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.339250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.347680 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.350227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnzn4\" (UniqueName: \"kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.363374 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:23 crc kubenswrapper[4680]: I0126 16:57:23.474233 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 16:57:24 crc kubenswrapper[4680]: I0126 16:57:24.018027 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:57:24 crc kubenswrapper[4680]: I0126 16:57:24.023409 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 16:57:24 crc kubenswrapper[4680]: I0126 16:57:24.596876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"4a678bad-96c4-45a4-8f56-51b4763655b1","Type":"ContainerStarted","Data":"b7d35b33f2c73779c7785e17a40ae39b92a98fb8cf55d220b6958968a912ee3f"} Jan 26 16:57:28 crc kubenswrapper[4680]: I0126 16:57:28.170140 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:57:28 crc kubenswrapper[4680]: E0126 16:57:28.170876 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:57:40 crc kubenswrapper[4680]: I0126 16:57:40.170059 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:57:40 crc kubenswrapper[4680]: E0126 16:57:40.171413 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:57:54 crc kubenswrapper[4680]: I0126 16:57:54.170216 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:57:54 crc kubenswrapper[4680]: E0126 16:57:54.170998 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:58:06 crc kubenswrapper[4680]: I0126 16:58:06.171176 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:58:06 crc kubenswrapper[4680]: E0126 16:58:06.171911 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:58:11 crc kubenswrapper[4680]: E0126 16:58:11.351742 4680 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:58:11 crc kubenswrapper[4680]: E0126 16:58:11.352320 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 26 16:58:11 crc kubenswrapper[4680]: E0126 16:58:11.433990 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.73:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnzn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]Volu
Jan 26 16:58:18 crc kubenswrapper[4680]: I0126 16:58:18.170340 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 16:58:18 crc kubenswrapper[4680]: E0126 16:58:18.171190 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:58:25 crc kubenswrapper[4680]: I0126 16:58:25.409753 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 26 16:58:28 crc kubenswrapper[4680]: I0126 16:58:28.174641 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"4a678bad-96c4-45a4-8f56-51b4763655b1","Type":"ContainerStarted","Data":"a499704ab2790a08e46423f2c522171c084e470fc7e32e0331fd4392da2966c1"}
Jan 26 16:58:28 crc kubenswrapper[4680]: I0126 16:58:28.199575 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.811697848 podStartE2EDuration="1m6.199553806s" podCreationTimestamp="2026-01-26 16:57:22 +0000 UTC" firstStartedPulling="2026-01-26 16:57:24.017762106 +0000 UTC m=+3119.179034375" lastFinishedPulling="2026-01-26 16:58:25.405618074 +0000 UTC m=+3180.566890333" observedRunningTime="2026-01-26 16:58:28.190592814 +0000 UTC m=+3183.351865093" watchObservedRunningTime="2026-01-26 16:58:28.199553806 +0000 UTC m=+3183.360826075"
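The podStartSLOduration arithmetic in the last entry is worth spelling out: the SLO duration is the end-to-end startup time minus the image-pull window, which is how 1m6.2s of wall time reduces to roughly 4.81s. Recomputing it from the timestamps in the entry (the last digit or two differ from the logged value only by float rounding):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-26 16:57:22 +0000 UTC")
	firstPull := mustParse("2026-01-26 16:57:24.017762106 +0000 UTC")
	lastPull := mustParse("2026-01-26 16:58:25.405618074 +0000 UTC")
	running := mustParse("2026-01-26 16:58:28.199553806 +0000 UTC")

	e2e := running.Sub(created)     // 1m6.199553806s, the podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 1m1.387855968s spent pulling the image
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", e2e-pull) // ≈4.8117s, as logged
}
```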
Jan 26 16:58:30 crc kubenswrapper[4680]: I0126 16:58:30.169641 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 16:58:30 crc kubenswrapper[4680]: E0126 16:58:30.170230 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.172982 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 16:58:41 crc kubenswrapper[4680]: E0126 16:58:41.173910 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.452970 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsbwl"]
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.454907 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.476200 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsbwl"]
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.554230 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.554814 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlfq\" (UniqueName: \"kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.554908 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.656823 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.657005 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmlfq\" (UniqueName: \"kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.657044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.657529 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.657579 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.681894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmlfq\" (UniqueName: \"kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq\") pod \"community-operators-vsbwl\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:41 crc kubenswrapper[4680]: I0126 16:58:41.784104 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:42 crc kubenswrapper[4680]: I0126 16:58:42.516663 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsbwl"]
Jan 26 16:58:43 crc kubenswrapper[4680]: I0126 16:58:43.309314 4680 generic.go:334] "Generic (PLEG): container finished" podID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerID="855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0" exitCode=0
Jan 26 16:58:43 crc kubenswrapper[4680]: I0126 16:58:43.309367 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerDied","Data":"855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0"}
Jan 26 16:58:43 crc kubenswrapper[4680]: I0126 16:58:43.309593 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerStarted","Data":"b99ebb2504cab47377a0e4248255f5fb87abb173beee63db14036fe8a1085110"}
Jan 26 16:58:44 crc kubenswrapper[4680]: I0126 16:58:44.782818 4680 scope.go:117] "RemoveContainer" containerID="c405d025a3bb8844719b37cca24c35f5d89a4fd047bfb3838373ea3130a7a877"
Jan 26 16:58:44 crc kubenswrapper[4680]: I0126 16:58:44.824996 4680 scope.go:117] "RemoveContainer" containerID="4aa82d2d9d21b811dbacd1ea660d7b756e52e7fcc7d396d89af328e85c7ccd73"
Jan 26 16:58:44 crc kubenswrapper[4680]: I0126 16:58:44.848156 4680 scope.go:117] "RemoveContainer" containerID="59f3e17e2fd10cf4ac68cf16ab27cbbed8000224f30408a043bbe3aa4fe67e80"
Jan 26 16:58:45 crc kubenswrapper[4680]: I0126 16:58:45.326733 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerStarted","Data":"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b"}
Jan 26 16:58:46 crc kubenswrapper[4680]: I0126 16:58:46.338350 4680 generic.go:334] "Generic (PLEG): container finished" podID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerID="8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b" exitCode=0
Jan 26 16:58:46 crc kubenswrapper[4680]: I0126 16:58:46.339203 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerDied","Data":"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b"}
Jan 26 16:58:47 crc kubenswrapper[4680]: I0126 16:58:47.354229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerStarted","Data":"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068"}
Jan 26 16:58:47 crc kubenswrapper[4680]: I0126 16:58:47.385768 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsbwl" podStartSLOduration=2.718764303 podStartE2EDuration="6.385749098s" podCreationTimestamp="2026-01-26 16:58:41 +0000 UTC" firstStartedPulling="2026-01-26 16:58:43.310953219 +0000 UTC m=+3198.472225488" lastFinishedPulling="2026-01-26 16:58:46.977938014 +0000 UTC m=+3202.139210283" observedRunningTime="2026-01-26 16:58:47.376303813 +0000 UTC m=+3202.537576082" watchObservedRunningTime="2026-01-26 16:58:47.385749098 +0000 UTC m=+3202.547021387"
Jan 26 16:58:51 crc kubenswrapper[4680]: I0126 16:58:51.785130 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:51 crc kubenswrapper[4680]: I0126 16:58:51.785683 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:51 crc kubenswrapper[4680]: I0126 16:58:51.830428 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:52 crc kubenswrapper[4680]: I0126 16:58:52.450240 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsbwl"
Jan 26 16:58:52 crc kubenswrapper[4680]: I0126 16:58:52.503488 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsbwl"]
containerName="registry-server" containerID="cri-o://d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068" gracePeriod=2 Jan 26 16:58:54 crc kubenswrapper[4680]: I0126 16:58:54.925161 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsbwl" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.034920 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content\") pod \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.035089 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmlfq\" (UniqueName: \"kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq\") pod \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.035122 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities\") pod \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\" (UID: \"777e8b1a-85e1-4ce0-8ce6-6894b025d987\") " Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.036350 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities" (OuterVolumeSpecName: "utilities") pod "777e8b1a-85e1-4ce0-8ce6-6894b025d987" (UID: "777e8b1a-85e1-4ce0-8ce6-6894b025d987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.049605 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq" (OuterVolumeSpecName: "kube-api-access-gmlfq") pod "777e8b1a-85e1-4ce0-8ce6-6894b025d987" (UID: "777e8b1a-85e1-4ce0-8ce6-6894b025d987"). InnerVolumeSpecName "kube-api-access-gmlfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.097332 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777e8b1a-85e1-4ce0-8ce6-6894b025d987" (UID: "777e8b1a-85e1-4ce0-8ce6-6894b025d987"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.136901 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.136939 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmlfq\" (UniqueName: \"kubernetes.io/projected/777e8b1a-85e1-4ce0-8ce6-6894b025d987-kube-api-access-gmlfq\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.136950 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777e8b1a-85e1-4ce0-8ce6-6894b025d987-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.431710 4680 generic.go:334] "Generic (PLEG): container finished" podID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerID="d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068" exitCode=0 Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.431984 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerDied","Data":"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068"} Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.431988 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsbwl" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.432013 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsbwl" event={"ID":"777e8b1a-85e1-4ce0-8ce6-6894b025d987","Type":"ContainerDied","Data":"b99ebb2504cab47377a0e4248255f5fb87abb173beee63db14036fe8a1085110"} Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.432048 4680 scope.go:117] "RemoveContainer" containerID="d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.457196 4680 scope.go:117] "RemoveContainer" containerID="8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.458058 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsbwl"] Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.469777 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vsbwl"] Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.478154 4680 scope.go:117] "RemoveContainer" containerID="855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.517748 4680 scope.go:117] "RemoveContainer" containerID="d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068" Jan 26 16:58:55 crc kubenswrapper[4680]: E0126 16:58:55.518210 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068\": container with ID starting with d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068 not found: ID does not exist" containerID="d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.518264 
4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068"} err="failed to get container status \"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068\": rpc error: code = NotFound desc = could not find container \"d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068\": container with ID starting with d599d04b5741e2bc65f2a5eec8813e70f2beb6c695f0f4a8ff43d7c3c35c1068 not found: ID does not exist" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.518291 4680 scope.go:117] "RemoveContainer" containerID="8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b" Jan 26 16:58:55 crc kubenswrapper[4680]: E0126 16:58:55.518713 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b\": container with ID starting with 8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b not found: ID does not exist" containerID="8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.518745 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b"} err="failed to get container status \"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b\": rpc error: code = NotFound desc = could not find container \"8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b\": container with ID starting with 8b9decbb0637efc9c7e707d0c360f911a7ea8090ed45be1662aaed4ad12e7b1b not found: ID does not exist" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.518767 4680 scope.go:117] "RemoveContainer" containerID="855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0" Jan 26 16:58:55 crc kubenswrapper[4680]: E0126 16:58:55.519049 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0\": container with ID starting with 855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0 not found: ID does not exist" containerID="855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0" Jan 26 16:58:55 crc kubenswrapper[4680]: I0126 16:58:55.519095 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0"} err="failed to get container status \"855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0\": rpc error: code = NotFound desc = could not find container \"855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0\": container with ID starting with 855ddf9db1a0cde66d2dd070fadb3a20609a95d7ea67e08eda8aaa7c06f4e0c0 not found: ID does not exist" Jan 26 16:58:57 crc kubenswrapper[4680]: I0126 16:58:57.180040 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" path="/var/lib/kubelet/pods/777e8b1a-85e1-4ce0-8ce6-6894b025d987/volumes" Jan 26 16:59:09 crc kubenswrapper[4680]: I0126 16:59:09.169989 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:59:09 crc kubenswrapper[4680]: E0126 16:59:09.170781 4680 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:59:21 crc kubenswrapper[4680]: I0126 16:59:21.170637 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:59:21 crc kubenswrapper[4680]: E0126 16:59:21.171573 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:59:36 crc kubenswrapper[4680]: I0126 16:59:36.170226 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:59:36 crc kubenswrapper[4680]: E0126 16:59:36.171962 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 16:59:49 crc kubenswrapper[4680]: I0126 16:59:49.169699 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 16:59:49 crc kubenswrapper[4680]: E0126 16:59:49.171500 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.174105 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg"] Jan 26 17:00:00 crc kubenswrapper[4680]: E0126 17:00:00.190232 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.191076 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4680]: E0126 17:00:00.191174 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.191245 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4680]: E0126 17:00:00.191327 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" 
containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.191405 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.191681 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="777e8b1a-85e1-4ce0-8ce6-6894b025d987" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.192461 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg"] Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.192651 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.196747 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.196911 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.379895 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55q9x\" (UniqueName: \"kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.380031 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.380130 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.481428 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.481780 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.481868 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-55q9x\" (UniqueName: \"kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.482903 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.492689 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.498040 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55q9x\" (UniqueName: \"kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x\") pod \"collect-profiles-29490780-z25rg\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:00 crc kubenswrapper[4680]: I0126 17:00:00.527390 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:01 crc kubenswrapper[4680]: I0126 17:00:01.021908 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg"] Jan 26 17:00:01 crc kubenswrapper[4680]: I0126 17:00:01.939177 4680 generic.go:334] "Generic (PLEG): container finished" podID="3109f250-b24f-41e7-b633-21f6c63bdfae" containerID="d78818e1459ff8593e23585c7bc747d0aa6db47f037d232f71687cdd6c9c8270" exitCode=0 Jan 26 17:00:01 crc kubenswrapper[4680]: I0126 17:00:01.939251 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" event={"ID":"3109f250-b24f-41e7-b633-21f6c63bdfae","Type":"ContainerDied","Data":"d78818e1459ff8593e23585c7bc747d0aa6db47f037d232f71687cdd6c9c8270"} Jan 26 17:00:01 crc kubenswrapper[4680]: I0126 17:00:01.939496 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" event={"ID":"3109f250-b24f-41e7-b633-21f6c63bdfae","Type":"ContainerStarted","Data":"b2f2827141157e9194560dddb95bc96a0b6498c5cdf5881348999a23364581b9"} Jan 26 17:00:02 crc kubenswrapper[4680]: I0126 17:00:02.170312 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:00:02 crc kubenswrapper[4680]: E0126 17:00:02.170736 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.286289 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.439483 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume\") pod \"3109f250-b24f-41e7-b633-21f6c63bdfae\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.439734 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55q9x\" (UniqueName: \"kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x\") pod \"3109f250-b24f-41e7-b633-21f6c63bdfae\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.439848 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume\") pod \"3109f250-b24f-41e7-b633-21f6c63bdfae\" (UID: \"3109f250-b24f-41e7-b633-21f6c63bdfae\") " Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.441132 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume" (OuterVolumeSpecName: "config-volume") pod "3109f250-b24f-41e7-b633-21f6c63bdfae" (UID: "3109f250-b24f-41e7-b633-21f6c63bdfae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.448306 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3109f250-b24f-41e7-b633-21f6c63bdfae" (UID: "3109f250-b24f-41e7-b633-21f6c63bdfae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.451178 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x" (OuterVolumeSpecName: "kube-api-access-55q9x") pod "3109f250-b24f-41e7-b633-21f6c63bdfae" (UID: "3109f250-b24f-41e7-b633-21f6c63bdfae"). InnerVolumeSpecName "kube-api-access-55q9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.542719 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3109f250-b24f-41e7-b633-21f6c63bdfae-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.542760 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3109f250-b24f-41e7-b633-21f6c63bdfae-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.542774 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55q9x\" (UniqueName: \"kubernetes.io/projected/3109f250-b24f-41e7-b633-21f6c63bdfae-kube-api-access-55q9x\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.956677 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" event={"ID":"3109f250-b24f-41e7-b633-21f6c63bdfae","Type":"ContainerDied","Data":"b2f2827141157e9194560dddb95bc96a0b6498c5cdf5881348999a23364581b9"} Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.956731 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f2827141157e9194560dddb95bc96a0b6498c5cdf5881348999a23364581b9" Jan 26 17:00:03 crc kubenswrapper[4680]: I0126 17:00:03.956746 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg" Jan 26 17:00:04 crc kubenswrapper[4680]: I0126 17:00:04.359120 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv"] Jan 26 17:00:04 crc kubenswrapper[4680]: I0126 17:00:04.367740 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-6wwcv"] Jan 26 17:00:05 crc kubenswrapper[4680]: I0126 17:00:05.179976 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfdcd66-b3be-4e6f-9700-91414b2926be" path="/var/lib/kubelet/pods/4bfdcd66-b3be-4e6f-9700-91414b2926be/volumes" Jan 26 17:00:15 crc kubenswrapper[4680]: I0126 17:00:15.176028 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:00:15 crc kubenswrapper[4680]: E0126 17:00:15.177726 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:28 crc kubenswrapper[4680]: I0126 17:00:28.169927 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:00:28 crc kubenswrapper[4680]: E0126 17:00:28.170813 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:41 crc kubenswrapper[4680]: I0126 17:00:41.169299 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:00:41 crc kubenswrapper[4680]: E0126 17:00:41.170258 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:44 crc kubenswrapper[4680]: I0126 17:00:44.974361 4680 scope.go:117] "RemoveContainer" containerID="2555271667f61abf5c36e4fe8ca6d8c1531e5ed19463e47954f6e1006eba917e" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.169983 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:00:52 crc kubenswrapper[4680]: E0126 17:00:52.170477 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.347116 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:00:52 crc kubenswrapper[4680]: E0126 17:00:52.347621 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3109f250-b24f-41e7-b633-21f6c63bdfae" containerName="collect-profiles" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.347646 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3109f250-b24f-41e7-b633-21f6c63bdfae" containerName="collect-profiles" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.347940 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3109f250-b24f-41e7-b633-21f6c63bdfae" containerName="collect-profiles" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.351479 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.366876 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.528961 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.529756 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv47\" (UniqueName: \"kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.529953 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.632124 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.632229 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.632338 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv47\" (UniqueName: \"kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.632737 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.632825 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.653748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fnv47\" (UniqueName: \"kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47\") pod \"redhat-operators-gbmcj\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:52 crc kubenswrapper[4680]: I0126 17:00:52.677767 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:00:53 crc kubenswrapper[4680]: I0126 17:00:53.245034 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:00:53 crc kubenswrapper[4680]: I0126 17:00:53.728527 4680 generic.go:334] "Generic (PLEG): container finished" podID="c876aeb2-affd-411c-8260-369eec8e989c" containerID="0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678" exitCode=0 Jan 26 17:00:53 crc kubenswrapper[4680]: I0126 17:00:53.728683 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerDied","Data":"0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678"} Jan 26 17:00:53 crc kubenswrapper[4680]: I0126 17:00:53.728844 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerStarted","Data":"0febb7eae907200cd289016d8d8391d757134e3cbe537482d2d700093786cf92"} Jan 26 17:00:55 crc kubenswrapper[4680]: I0126 17:00:55.757465 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerStarted","Data":"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126"} Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.170608 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490781-dgmr6"] Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.172769 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.191898 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-dgmr6"] Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.314537 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.316346 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.316481 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqt56\" (UniqueName: \"kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.316706 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.419250 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.419318 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.419405 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqt56\" (UniqueName: \"kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.419516 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.426501 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.427594 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.440599 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.441678 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqt56\" (UniqueName: \"kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56\") pod \"keystone-cron-29490781-dgmr6\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.495244 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.802475 4680 generic.go:334] "Generic (PLEG): container finished" podID="c876aeb2-affd-411c-8260-369eec8e989c" containerID="9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126" exitCode=0 Jan 26 17:01:00 crc kubenswrapper[4680]: I0126 17:01:00.802839 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerDied","Data":"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126"} Jan 26 17:01:01 crc kubenswrapper[4680]: I0126 17:01:01.161650 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-dgmr6"] Jan 26 17:01:01 crc kubenswrapper[4680]: I0126 17:01:01.812765 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-dgmr6" event={"ID":"7a7f26b5-da6a-4652-b07f-f2fa06181c54","Type":"ContainerStarted","Data":"a53e86a40413fc883fe471166ade0af2c49d996e805336f9e8bd60afe53b69cc"} Jan 26 17:01:02 crc kubenswrapper[4680]: I0126 17:01:02.822360 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-dgmr6" event={"ID":"7a7f26b5-da6a-4652-b07f-f2fa06181c54","Type":"ContainerStarted","Data":"17671bc45fea3273dfa54274124b2f0f2b9249370415abef204fbc39dd9b1eeb"} Jan 26 17:01:02 crc kubenswrapper[4680]: I0126 17:01:02.825955 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerStarted","Data":"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64"} Jan 26 17:01:02 crc kubenswrapper[4680]: I0126 17:01:02.848633 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490781-dgmr6" podStartSLOduration=2.848611088 podStartE2EDuration="2.848611088s" podCreationTimestamp="2026-01-26 
17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:02.844879034 +0000 UTC m=+3338.006151313" watchObservedRunningTime="2026-01-26 17:01:02.848611088 +0000 UTC m=+3338.009883347" Jan 26 17:01:02 crc kubenswrapper[4680]: I0126 17:01:02.873431 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbmcj" podStartSLOduration=3.292697691 podStartE2EDuration="10.873405394s" podCreationTimestamp="2026-01-26 17:00:52 +0000 UTC" firstStartedPulling="2026-01-26 17:00:53.730660732 +0000 UTC m=+3328.891933001" lastFinishedPulling="2026-01-26 17:01:01.311368435 +0000 UTC m=+3336.472640704" observedRunningTime="2026-01-26 17:01:02.86469878 +0000 UTC m=+3338.025971059" watchObservedRunningTime="2026-01-26 17:01:02.873405394 +0000 UTC m=+3338.034677653" Jan 26 17:01:05 crc kubenswrapper[4680]: I0126 17:01:05.179223 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:01:05 crc kubenswrapper[4680]: E0126 17:01:05.179896 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:01:08 crc kubenswrapper[4680]: I0126 17:01:08.903894 4680 generic.go:334] "Generic (PLEG): container finished" podID="7a7f26b5-da6a-4652-b07f-f2fa06181c54" containerID="17671bc45fea3273dfa54274124b2f0f2b9249370415abef204fbc39dd9b1eeb" exitCode=0 Jan 26 17:01:08 crc kubenswrapper[4680]: I0126 17:01:08.904533 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-dgmr6" event={"ID":"7a7f26b5-da6a-4652-b07f-f2fa06181c54","Type":"ContainerDied","Data":"17671bc45fea3273dfa54274124b2f0f2b9249370415abef204fbc39dd9b1eeb"} Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.377600 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.485999 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data\") pod \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.486346 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqt56\" (UniqueName: \"kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56\") pod \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.486848 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys\") pod \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.487003 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle\") pod \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\" (UID: \"7a7f26b5-da6a-4652-b07f-f2fa06181c54\") " Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.504354 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7a7f26b5-da6a-4652-b07f-f2fa06181c54" (UID: "7a7f26b5-da6a-4652-b07f-f2fa06181c54"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.525729 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56" (OuterVolumeSpecName: "kube-api-access-pqt56") pod "7a7f26b5-da6a-4652-b07f-f2fa06181c54" (UID: "7a7f26b5-da6a-4652-b07f-f2fa06181c54"). InnerVolumeSpecName "kube-api-access-pqt56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.551842 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a7f26b5-da6a-4652-b07f-f2fa06181c54" (UID: "7a7f26b5-da6a-4652-b07f-f2fa06181c54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.576835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data" (OuterVolumeSpecName: "config-data") pod "7a7f26b5-da6a-4652-b07f-f2fa06181c54" (UID: "7a7f26b5-da6a-4652-b07f-f2fa06181c54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.590109 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.590163 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.590180 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7f26b5-da6a-4652-b07f-f2fa06181c54-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.590194 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqt56\" (UniqueName: \"kubernetes.io/projected/7a7f26b5-da6a-4652-b07f-f2fa06181c54-kube-api-access-pqt56\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.982988 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-dgmr6" event={"ID":"7a7f26b5-da6a-4652-b07f-f2fa06181c54","Type":"ContainerDied","Data":"a53e86a40413fc883fe471166ade0af2c49d996e805336f9e8bd60afe53b69cc"} Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.983053 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53e86a40413fc883fe471166ade0af2c49d996e805336f9e8bd60afe53b69cc" Jan 26 17:01:10 crc kubenswrapper[4680]: I0126 17:01:10.983147 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-dgmr6" Jan 26 17:01:12 crc kubenswrapper[4680]: I0126 17:01:12.678931 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:12 crc kubenswrapper[4680]: I0126 17:01:12.679427 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:13 crc kubenswrapper[4680]: I0126 17:01:13.863467 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gbmcj" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server" probeResult="failure" output=< Jan 26 17:01:13 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:01:13 crc kubenswrapper[4680]: > Jan 26 17:01:20 crc kubenswrapper[4680]: I0126 17:01:20.170759 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:01:20 crc kubenswrapper[4680]: E0126 17:01:20.172687 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:01:23 crc kubenswrapper[4680]: I0126 17:01:23.727790 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gbmcj" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server" 
probeResult="failure" output=< Jan 26 17:01:23 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:01:23 crc kubenswrapper[4680]: > Jan 26 17:01:32 crc kubenswrapper[4680]: I0126 17:01:32.170687 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:01:32 crc kubenswrapper[4680]: E0126 17:01:32.172097 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:01:32 crc kubenswrapper[4680]: I0126 17:01:32.726760 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:32 crc kubenswrapper[4680]: I0126 17:01:32.773942 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:32 crc kubenswrapper[4680]: I0126 17:01:32.967286 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:01:34 crc kubenswrapper[4680]: I0126 17:01:34.195018 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbmcj" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server" containerID="cri-o://bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64" gracePeriod=2 Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.106420 4680 trace.go:236] Trace[1013855793]: "Calculate volume metrics of mcd-auth-proxy-config for pod openshift-machine-config-operator/machine-config-daemon-qr4fm" (26-Jan-2026 17:01:33.971) (total time: 1134ms): Jan 26 17:01:35 crc kubenswrapper[4680]: Trace[1013855793]: [1.134011827s] [1.134011827s] END Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.567916 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.632405 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities\") pod \"c876aeb2-affd-411c-8260-369eec8e989c\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.632491 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnv47\" (UniqueName: \"kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47\") pod \"c876aeb2-affd-411c-8260-369eec8e989c\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.632678 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content\") pod \"c876aeb2-affd-411c-8260-369eec8e989c\" (UID: \"c876aeb2-affd-411c-8260-369eec8e989c\") " Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.635941 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities" (OuterVolumeSpecName: "utilities") pod "c876aeb2-affd-411c-8260-369eec8e989c" (UID: "c876aeb2-affd-411c-8260-369eec8e989c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.638739 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47" (OuterVolumeSpecName: "kube-api-access-fnv47") pod "c876aeb2-affd-411c-8260-369eec8e989c" (UID: "c876aeb2-affd-411c-8260-369eec8e989c"). InnerVolumeSpecName "kube-api-access-fnv47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.734670 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.734703 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnv47\" (UniqueName: \"kubernetes.io/projected/c876aeb2-affd-411c-8260-369eec8e989c-kube-api-access-fnv47\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.777419 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c876aeb2-affd-411c-8260-369eec8e989c" (UID: "c876aeb2-affd-411c-8260-369eec8e989c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:01:35 crc kubenswrapper[4680]: I0126 17:01:35.836649 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c876aeb2-affd-411c-8260-369eec8e989c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.216750 4680 generic.go:334] "Generic (PLEG): container finished" podID="c876aeb2-affd-411c-8260-369eec8e989c" containerID="bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64" exitCode=0 Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.216840 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmcj" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.216835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerDied","Data":"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64"} Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.217242 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmcj" event={"ID":"c876aeb2-affd-411c-8260-369eec8e989c","Type":"ContainerDied","Data":"0febb7eae907200cd289016d8d8391d757134e3cbe537482d2d700093786cf92"} Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.217274 4680 scope.go:117] "RemoveContainer" containerID="bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.247405 4680 scope.go:117] "RemoveContainer" containerID="9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.260183 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.270235 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbmcj"] Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.276158 4680 scope.go:117] "RemoveContainer" containerID="0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.322143 4680 scope.go:117] "RemoveContainer" containerID="bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64" Jan 26 17:01:36 crc kubenswrapper[4680]: E0126 17:01:36.322870 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64\": container with ID starting with bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64 not found: ID does not exist" containerID="bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.322950 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64"} err="failed to get container status \"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64\": rpc error: code = NotFound desc = could not find container \"bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64\": container with ID starting with bebcf976333d527bfff76909e743cdb7fe2d545933cab53bf746d34979e58f64 not found: ID does not exist" Jan 26 17:01:36 crc 
kubenswrapper[4680]: I0126 17:01:36.322988 4680 scope.go:117] "RemoveContainer" containerID="9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126" Jan 26 17:01:36 crc kubenswrapper[4680]: E0126 17:01:36.323525 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126\": container with ID starting with 9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126 not found: ID does not exist" containerID="9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.323569 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126"} err="failed to get container status \"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126\": rpc error: code = NotFound desc = could not find container \"9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126\": container with ID starting with 9f8d80785b59e9d453d2a30ef6e9d56a75b6bc6e6b7332195c21be8d0b746126 not found: ID does not exist" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.323596 4680 scope.go:117] "RemoveContainer" containerID="0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678" Jan 26 17:01:36 crc kubenswrapper[4680]: E0126 17:01:36.324014 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678\": container with ID starting with 0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678 not found: ID does not exist" containerID="0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678" Jan 26 17:01:36 crc kubenswrapper[4680]: I0126 17:01:36.324045 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678"} err="failed to get container status \"0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678\": rpc error: code = NotFound desc = could not find container \"0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678\": container with ID starting with 0ed58e201c37c784d6fde4bf11a491c6dbeaa45da3a0a6a7cd57dbc4b3865678 not found: ID does not exist" Jan 26 17:01:37 crc kubenswrapper[4680]: I0126 17:01:37.183485 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c876aeb2-affd-411c-8260-369eec8e989c" path="/var/lib/kubelet/pods/c876aeb2-affd-411c-8260-369eec8e989c/volumes" Jan 26 17:01:45 crc kubenswrapper[4680]: I0126 17:01:45.174894 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" Jan 26 17:01:45 crc kubenswrapper[4680]: E0126 17:01:45.175787 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:01:59 crc kubenswrapper[4680]: I0126 17:01:59.170417 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d" 
Jan 26 17:01:59 crc kubenswrapper[4680]: E0126 17:01:59.171226 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:02:10 crc kubenswrapper[4680]: I0126 17:02:10.169764 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 17:02:10 crc kubenswrapper[4680]: E0126 17:02:10.170629 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:02:23 crc kubenswrapper[4680]: I0126 17:02:23.175217 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 17:02:23 crc kubenswrapper[4680]: I0126 17:02:23.648835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec"}
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.717336 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:02:37 crc kubenswrapper[4680]: E0126 17:02:37.720802 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="extract-utilities"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.720837 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="extract-utilities"
Jan 26 17:02:37 crc kubenswrapper[4680]: E0126 17:02:37.720863 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="extract-content"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.720871 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="extract-content"
Jan 26 17:02:37 crc kubenswrapper[4680]: E0126 17:02:37.720888 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.720896 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server"
Jan 26 17:02:37 crc kubenswrapper[4680]: E0126 17:02:37.720919 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7f26b5-da6a-4652-b07f-f2fa06181c54" containerName="keystone-cron"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.720927 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7f26b5-da6a-4652-b07f-f2fa06181c54" containerName="keystone-cron"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.721169 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c876aeb2-affd-411c-8260-369eec8e989c" containerName="registry-server"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.721202 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7f26b5-da6a-4652-b07f-f2fa06181c54" containerName="keystone-cron"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.722814 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.772424 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.775321 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.775487 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.775549 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gv76\" (UniqueName: \"kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.877015 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.877435 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gv76\" (UniqueName: \"kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.877503 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.880092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.880690 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:37 crc kubenswrapper[4680]: I0126 17:02:37.910530 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gv76\" (UniqueName: \"kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76\") pod \"certified-operators-9qpl4\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") " pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:38 crc kubenswrapper[4680]: I0126 17:02:38.050016 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:39 crc kubenswrapper[4680]: I0126 17:02:39.978580 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:02:40 crc kubenswrapper[4680]: W0126 17:02:40.032673 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893777d7_42f1_47aa_b76c_04cc956c669d.slice/crio-a2252de03f64203991a9487d9bf85d44a4ab6a28b0d48aff58a313a75183cc8c WatchSource:0}: Error finding container a2252de03f64203991a9487d9bf85d44a4ab6a28b0d48aff58a313a75183cc8c: Status 404 returned error can't find the container with id a2252de03f64203991a9487d9bf85d44a4ab6a28b0d48aff58a313a75183cc8c
Jan 26 17:02:40 crc kubenswrapper[4680]: I0126 17:02:40.831137 4680 generic.go:334] "Generic (PLEG): container finished" podID="893777d7-42f1-47aa-b76c-04cc956c669d" containerID="b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc" exitCode=0
Jan 26 17:02:40 crc kubenswrapper[4680]: I0126 17:02:40.831206 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerDied","Data":"b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc"}
Jan 26 17:02:40 crc kubenswrapper[4680]: I0126 17:02:40.831534 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerStarted","Data":"a2252de03f64203991a9487d9bf85d44a4ab6a28b0d48aff58a313a75183cc8c"}
Jan 26 17:02:40 crc kubenswrapper[4680]: I0126 17:02:40.857618 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:02:42 crc kubenswrapper[4680]: I0126 17:02:42.850911 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerStarted","Data":"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"}
Jan 26 17:02:46 crc kubenswrapper[4680]: I0126 17:02:46.899758 4680 generic.go:334] "Generic (PLEG): container finished" podID="893777d7-42f1-47aa-b76c-04cc956c669d" containerID="6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf" exitCode=0
Jan 26 17:02:46 crc kubenswrapper[4680]: I0126 17:02:46.899836 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerDied","Data":"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"}
Jan 26 17:02:48 crc kubenswrapper[4680]: I0126 17:02:48.920518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerStarted","Data":"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"}
Jan 26 17:02:48 crc kubenswrapper[4680]: I0126 17:02:48.976717 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9qpl4" podStartSLOduration=4.824548709 podStartE2EDuration="11.973785803s" podCreationTimestamp="2026-01-26 17:02:37 +0000 UTC" firstStartedPulling="2026-01-26 17:02:40.836038486 +0000 UTC m=+3435.997310755" lastFinishedPulling="2026-01-26 17:02:47.98527558 +0000 UTC m=+3443.146547849" observedRunningTime="2026-01-26 17:02:48.971157859 +0000 UTC m=+3444.132430128" watchObservedRunningTime="2026-01-26 17:02:48.973785803 +0000 UTC m=+3444.135058072"
Jan 26 17:02:58 crc kubenswrapper[4680]: I0126 17:02:58.050877 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:58 crc kubenswrapper[4680]: I0126 17:02:58.051526 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:58 crc kubenswrapper[4680]: I0126 17:02:58.109732 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:59 crc kubenswrapper[4680]: I0126 17:02:59.054892 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:02:59 crc kubenswrapper[4680]: I0126 17:02:59.114106 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:03:01 crc kubenswrapper[4680]: I0126 17:03:01.022875 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9qpl4" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="registry-server" containerID="cri-o://ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6" gracePeriod=2
Jan 26 17:03:01 crc kubenswrapper[4680]: I0126 17:03:01.945915 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.036389 4680 generic.go:334] "Generic (PLEG): container finished" podID="893777d7-42f1-47aa-b76c-04cc956c669d" containerID="ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6" exitCode=0
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.037256 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerDied","Data":"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"}
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.037454 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qpl4" event={"ID":"893777d7-42f1-47aa-b76c-04cc956c669d","Type":"ContainerDied","Data":"a2252de03f64203991a9487d9bf85d44a4ab6a28b0d48aff58a313a75183cc8c"}
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.038287 4680 scope.go:117] "RemoveContainer" containerID="ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.038686 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qpl4"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.096591 4680 scope.go:117] "RemoveContainer" containerID="6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.126894 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content\") pod \"893777d7-42f1-47aa-b76c-04cc956c669d\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") "
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.127137 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities\") pod \"893777d7-42f1-47aa-b76c-04cc956c669d\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") "
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.127204 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gv76\" (UniqueName: \"kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76\") pod \"893777d7-42f1-47aa-b76c-04cc956c669d\" (UID: \"893777d7-42f1-47aa-b76c-04cc956c669d\") "
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.131191 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities" (OuterVolumeSpecName: "utilities") pod "893777d7-42f1-47aa-b76c-04cc956c669d" (UID: "893777d7-42f1-47aa-b76c-04cc956c669d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.143526 4680 scope.go:117] "RemoveContainer" containerID="b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.182210 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76" (OuterVolumeSpecName: "kube-api-access-8gv76") pod "893777d7-42f1-47aa-b76c-04cc956c669d" (UID: "893777d7-42f1-47aa-b76c-04cc956c669d"). InnerVolumeSpecName "kube-api-access-8gv76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.232236 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.232271 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gv76\" (UniqueName: \"kubernetes.io/projected/893777d7-42f1-47aa-b76c-04cc956c669d-kube-api-access-8gv76\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.260654 4680 scope.go:117] "RemoveContainer" containerID="ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"
Jan 26 17:03:02 crc kubenswrapper[4680]: E0126 17:03:02.268840 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6\": container with ID starting with ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6 not found: ID does not exist" containerID="ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.269348 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6"} err="failed to get container status \"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6\": rpc error: code = NotFound desc = could not find container \"ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6\": container with ID starting with ab9074ca3712081889c4194fd1931de6b5193ebcdb62c3d2405260232f09d4c6 not found: ID does not exist"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.269471 4680 scope.go:117] "RemoveContainer" containerID="6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"
Jan 26 17:03:02 crc kubenswrapper[4680]: E0126 17:03:02.271631 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf\": container with ID starting with 6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf not found: ID does not exist" containerID="6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.271741 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf"} err="failed to get container status \"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf\": rpc error: code = NotFound desc = could not find container \"6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf\": container with ID starting with 6609a8c93f96820400eda96722cb9bfeb42859c9b2ef1fc596ec62a5029b62cf not found: ID does not exist"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.271835 4680 scope.go:117] "RemoveContainer" containerID="b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc"
Jan 26 17:03:02 crc kubenswrapper[4680]: E0126 17:03:02.272277 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc\": container with ID starting with b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc not found: ID does not exist" containerID="b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.272325 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc"} err="failed to get container status \"b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc\": rpc error: code = NotFound desc = could not find container \"b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc\": container with ID starting with b7c8c436675c61ca98ecc2086c85248ed91cfec895fa60c83ffc1e63546f2afc not found: ID does not exist"
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.289443 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "893777d7-42f1-47aa-b76c-04cc956c669d" (UID: "893777d7-42f1-47aa-b76c-04cc956c669d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.334778 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893777d7-42f1-47aa-b76c-04cc956c669d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.381267 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:03:02 crc kubenswrapper[4680]: I0126 17:03:02.393478 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9qpl4"]
Jan 26 17:03:03 crc kubenswrapper[4680]: I0126 17:03:03.181924 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" path="/var/lib/kubelet/pods/893777d7-42f1-47aa-b76c-04cc956c669d/volumes"
Jan 26 17:04:46 crc kubenswrapper[4680]: I0126 17:04:46.982120 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:04:46 crc kubenswrapper[4680]: I0126 17:04:46.984469 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:05:16 crc kubenswrapper[4680]: I0126 17:05:16.981735 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:05:16 crc kubenswrapper[4680]: I0126 17:05:16.983554 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:05:46 crc kubenswrapper[4680]: I0126 17:05:46.980996 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:05:46 crc kubenswrapper[4680]: I0126 17:05:46.981622 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:05:46 crc kubenswrapper[4680]: I0126 17:05:46.983011 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm"
Jan 26 17:05:46 crc kubenswrapper[4680]: I0126 17:05:46.984538 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:05:46 crc kubenswrapper[4680]: I0126 17:05:46.984650 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec" gracePeriod=600
Jan 26 17:05:47 crc kubenswrapper[4680]: I0126 17:05:47.563038 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec" exitCode=0
Jan 26 17:05:47 crc kubenswrapper[4680]: I0126 17:05:47.563131 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec"}
Jan 26 17:05:47 crc kubenswrapper[4680]: I0126 17:05:47.563545 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"}
Jan 26 17:05:47 crc kubenswrapper[4680]: I0126 17:05:47.563784 4680 scope.go:117] "RemoveContainer" containerID="d5a3829d7e4c397e03c5e554ff1cb836121846ed1e8a0e602394faf955c6877d"
Jan 26 17:08:17 crc kubenswrapper[4680]: I0126 17:08:16.986301 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:08:17 crc kubenswrapper[4680]: I0126 17:08:17.012386 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:08:46 crc kubenswrapper[4680]: I0126 17:08:46.980820 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:08:46 crc kubenswrapper[4680]: I0126 17:08:46.981612 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:09:16 crc kubenswrapper[4680]: I0126 17:09:16.981683 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:09:16 crc kubenswrapper[4680]: I0126 17:09:16.988166 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:09:16 crc kubenswrapper[4680]: I0126 17:09:16.990595 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm"
Jan 26 17:09:16 crc kubenswrapper[4680]: I0126 17:09:16.993226 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:09:16 crc kubenswrapper[4680]: I0126 17:09:16.993392 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" gracePeriod=600
Jan 26 17:09:17 crc kubenswrapper[4680]: E0126 17:09:17.249115 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:09:17 crc kubenswrapper[4680]: I0126 17:09:17.506415 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" exitCode=0
Jan 26 17:09:17 crc kubenswrapper[4680]: I0126 17:09:17.506466 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"}
Jan 26 17:09:17 crc kubenswrapper[4680]: I0126 17:09:17.508783 4680 scope.go:117] "RemoveContainer" containerID="4dd1758f3b3e8ef5be3144fdb077bdb31ce58319736f70937b9eccf4c4cc07ec"
Jan 26 17:09:17 crc kubenswrapper[4680]: I0126 17:09:17.510441 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:09:17 crc kubenswrapper[4680]: E0126 17:09:17.511169 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:09:28 crc kubenswrapper[4680]: I0126 17:09:28.171312 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:09:28 crc kubenswrapper[4680]: E0126 17:09:28.172614 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:09:42 crc kubenswrapper[4680]: I0126 17:09:42.170255 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:09:42 crc kubenswrapper[4680]: E0126 17:09:42.171107 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:09:54 crc kubenswrapper[4680]: I0126 17:09:54.170017 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:09:54 crc kubenswrapper[4680]: E0126 17:09:54.170867 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:10:07 crc kubenswrapper[4680]: I0126 17:10:07.169789 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:10:07 crc kubenswrapper[4680]: E0126 17:10:07.170550 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:10:19 crc kubenswrapper[4680]: I0126 17:10:19.172111 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:10:19 crc kubenswrapper[4680]: E0126 17:10:19.174976 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:10:35 crc kubenswrapper[4680]: I0126 17:10:35.172764 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:10:35 crc kubenswrapper[4680]: E0126 17:10:35.174233 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.489016 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbd5d"]
Jan 26 17:10:41 crc kubenswrapper[4680]: E0126 17:10:41.491470 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="extract-content"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.491502 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="extract-content"
Jan 26 17:10:41 crc kubenswrapper[4680]: E0126 17:10:41.491536 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="extract-utilities"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.491544 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="extract-utilities"
Jan 26 17:10:41 crc kubenswrapper[4680]: E0126 17:10:41.491561 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="registry-server"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.491567 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="registry-server"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.492186 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="893777d7-42f1-47aa-b76c-04cc956c669d" containerName="registry-server"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.495176 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.640169 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-catalog-content\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.640471 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-utilities\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.640632 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7nr\" (UniqueName: \"kubernetes.io/projected/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-kube-api-access-pm7nr\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.741922 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7nr\" (UniqueName: \"kubernetes.io/projected/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-kube-api-access-pm7nr\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.742269 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-catalog-content\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.742311 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-utilities\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.745894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-utilities\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.746671 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-catalog-content\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.769025 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7nr\" (UniqueName: \"kubernetes.io/projected/c03dc150-d5dd-4cb8-8250-e2b2b95980dd-kube-api-access-pm7nr\") pod \"redhat-marketplace-jbd5d\" (UID: \"c03dc150-d5dd-4cb8-8250-e2b2b95980dd\") " pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.790849 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbd5d"]
Jan 26 17:10:41 crc kubenswrapper[4680]: I0126 17:10:41.836860 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbd5d"
Jan 26 17:10:43 crc kubenswrapper[4680]: I0126 17:10:43.828354 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 17:10:43 crc kubenswrapper[4680]: I0126 17:10:43.828371 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.700553 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" podUID="db8c5f93-fbaf-4f34-9214-ec7e463beb79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.700554 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" podUID="db8c5f93-fbaf-4f34-9214-ec7e463beb79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.764481 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.764968 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.764525 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:44 crc kubenswrapper[4680]: I0126 17:10:44.765306 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174652 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174709 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174909 4680 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s6kzf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174927 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podUID="8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174939 4680 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.174961 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.299347 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podUID="0ba4109b-0e34-4c97-884a-d70052bf8082" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341299 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341676 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341712 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341576 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341924 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341300 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podUID="0ba4109b-0e34-4c97-884a-d70052bf8082" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.341891 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.442511 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.442949 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.443233 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.443294 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:45 crc kubenswrapper[4680]: I0126 17:10:45.826590 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-nqg55" podUID="11ded85a-b350-41a1-b9f2-f57901f116c5" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 26 17:10:47 crc kubenswrapper[4680]: I0126 17:10:47.330314 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:47 crc kubenswrapper[4680]: I0126 17:10:47.330676 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:47 crc kubenswrapper[4680]: I0126 17:10:47.330578 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:10:47 crc kubenswrapper[4680]: I0126 17:10:47.330735 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:47 crc kubenswrapper[4680]: I0126 17:10:47.852627 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zbpn8" podUID="8bc5951e-2f18-4454-9de8-03a295fe8e1a" containerName="ovnkube-controller" probeResult="failure" output="command timed out"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.041285 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" podUID="7619b024-3fab-49a5-abec-5b31e09a5c51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.124347 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podUID="e9eb4184-e77b-49c1-b4af-cae5dc77b953" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.124492 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podUID="c4788302-e01e-485b-b716-a6db7a2ac272" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.124543 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podUID="e9eb4184-e77b-49c1-b4af-cae5dc77b953" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.171971 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342"
Jan 26 17:10:48 crc kubenswrapper[4680]: E0126 17:10:48.173850 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.208361 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vpmf7" podUID="7619b024-3fab-49a5-abec-5b31e09a5c51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.208363 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.208975 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podUID="c4788302-e01e-485b-b716-a6db7a2ac272" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.291422 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.373288 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.373288 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" podUID="c9bc9b0e-b690-47c1-92ea-bea335fc0b41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.373569 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.455366 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" podUID="c9bc9b0e-b690-47c1-92ea-bea335fc0b41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.455365 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podUID="bba916e9-436b-4c01-ba4c-2f758ed6d988" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.537760 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.620328 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podUID="bba916e9-436b-4c01-ba4c-2f758ed6d988" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.620334 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podUID="8bd876e3-9283-4de7-80b0-3c1787745bfb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.703262 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.703298 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" podUID="4ef5b147-3e74-4417-9c89-e0f33fc62eba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.785410 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podUID="f79a7334-75ae-40a1-81c3-ce27e0567de9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.785410 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podUID="8bd876e3-9283-4de7-80b0-3c1787745bfb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.785720 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" podUID="4ef5b147-3e74-4417-9c89-e0f33fc62eba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.867316 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podUID="f79a7334-75ae-40a1-81c3-ce27e0567de9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.867342 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.951295 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:48 crc kubenswrapper[4680]: I0126 17:10:48.951310 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.035274 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.035364 4680 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podUID="bb8599b0-8155-440a-a0f5-505f73113a1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.035458 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podUID="bb8599b0-8155-440a-a0f5-505f73113a1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.117375 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" podUID="505c1441-c509-4792-ac15-8b218143a69f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.117882 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" podUID="505c1441-c509-4792-ac15-8b218143a69f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.201621 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.202110 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.202132 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.202212 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.202144 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.202405 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.385300 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.385300 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.41:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.385472 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.727692 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="91d568ce-4c36-4722-bd9f-f3ad544a0e8d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.810592 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" podUID="c0e0de34-8f98-4db6-abf2-856f6477119e" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.71:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.810630 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="91d568ce-4c36-4722-bd9f-f3ad544a0e8d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.811092 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:49 crc kubenswrapper[4680]: I0126 17:10:49.811291 4680 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.353225 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.353273 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.770129 4680 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-xr5vv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.24:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.770498 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" podUID="89cb99e5-a352-468f-bcc6-a90442f0bd6b" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.24:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.892286 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" podUID="13d27b97-b926-4a78-991d-e969612ff055" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:50 crc kubenswrapper[4680]: I0126 17:10:50.892502 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" podUID="13d27b97-b926-4a78-991d-e969612ff055" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.167370 4680 patch_prober.go:28] interesting pod/console-6fc5c8f49-48gmj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.167462 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6fc5c8f49-48gmj" podUID="b07144ca-cc49-4f4f-9620-88ebbdffce43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.200021 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" podUID="0d273daa-c1c4-4746-9e28-abf5e15aa387" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.323745 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.497369 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.620279 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.620320 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.620331 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.620282 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:51 crc kubenswrapper[4680]: I0126 17:10:51.841925 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.130369 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.133388 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator 
namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.137325 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.137448 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.172303 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-jzg2h" podUID="a30260c8-eca8-456a-a94d-61839973f6ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.49:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.213428 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-jzg2h" podUID="a30260c8-eca8-456a-a94d-61839973f6ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.49:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.720510 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.824703 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.829135 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.839639 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:10:52 crc kubenswrapper[4680]: I0126 17:10:52.839857 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.002423 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-x6xh2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure 
output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.002422 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-x6xh2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.003351 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" podUID="937d1b38-2a29-4846-bb8c-7995c583ac89" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.003374 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" podUID="937d1b38-2a29-4846-bb8c-7995c583ac89" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.084404 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-mqkb5" podUID="71470be4-25d6-4dab-8fa6-3850938403e2" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.084548 4680 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.084558 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-mqkb5" podUID="71470be4-25d6-4dab-8fa6-3850938403e2" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.084606 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.131445 4680 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-bjnls container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.131575 4680 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" podUID="f93ff197-4612-44d8-b67e-c98ae2906899" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.627475 4680 patch_prober.go:28] interesting pod/controller-manager-6d986fd6d8-rbc4h container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.628109 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" podUID="e7b8a972-ec6d-4501-80ed-cdcaba552029" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.632252 4680 patch_prober.go:28] interesting pod/controller-manager-6d986fd6d8-rbc4h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.632364 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" podUID="e7b8a972-ec6d-4501-80ed-cdcaba552029" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.670910 4680 patch_prober.go:28] interesting pod/route-controller-manager-798fbb5b47-4kwr8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.671004 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" podUID="41ae1a5a-420b-459d-bc28-071edd6dca3e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.671263 4680 patch_prober.go:28] interesting pod/route-controller-manager-798fbb5b47-4kwr8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.671477 4680 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" podUID="41ae1a5a-420b-459d-bc28-071edd6dca3e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.727267 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.845632 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.845899 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:10:53 crc kubenswrapper[4680]: I0126 17:10:53.846428 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-cs6qq" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:53 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:53 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.125640 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" podUID="51e3cde0-6a23-4d62-83ca-fc16415da2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.325876 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-cs6qq" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:54 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:54 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.328474 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5qzzv" podUID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:54 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:54 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.328485 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-42hb6" podUID="132d1f81-6236-4d81-a220-72d5bb914144" containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:54 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:54 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.328744 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-42hb6" podUID="132d1f81-6236-4d81-a220-72d5bb914144" 
containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:54 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:54 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.331104 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5qzzv" podUID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerName="registry-server" probeResult="failure" output=< Jan 26 17:10:54 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:10:54 crc kubenswrapper[4680]: > Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.424852 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.424933 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.424975 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.425006 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.658367 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" podUID="db8c5f93-fbaf-4f34-9214-ec7e463beb79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.763849 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.763903 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.763956 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" 
podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:54 crc kubenswrapper[4680]: I0126 17:10:54.763966 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.093164 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.093180 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.093289 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.094222 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.208407 4680 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s6kzf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.208375 4680 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.208819 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podUID="8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8" containerName="package-server-manager" probeResult="failure" output="Get 
\"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.208888 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413253 4680 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s6kzf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413325 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413337 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podUID="8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413276 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podUID="0ba4109b-0e34-4c97-884a-d70052bf8082" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413385 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413367 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413398 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413440 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413711 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413736 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413783 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.413803 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.414002 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.414083 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.414094 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.415103 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.415195 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" containerID="cri-o://17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a" gracePeriod=30 Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.415335 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"5e48ce683c69f87c7bc43e37ec414f3de70e9c48a6544094a1bb4c120ba5b0c4"} pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" containerMessage="Container hostpath-provisioner 
failed liveness probe, will be restarted" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.415418 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" containerID="cri-o://5e48ce683c69f87c7bc43e37ec414f3de70e9c48a6544094a1bb4c120ba5b0c4" gracePeriod=30 Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.441844 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.441904 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.442097 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.442184 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:55 crc kubenswrapper[4680]: E0126 17:10:55.550521 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbda51148_a37f_4767_8970_51ea56ecdfc7.slice/crio-990529f3fcfa8fe72d855a069bb0c66f7ac6ed98e662ad0d1e22a6c83deb7af5.scope\": RecentStats: unable to find data in memory cache]" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.827109 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.827812 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-97dr2" podUID="cba0da08-3a4e-425d-9bc0-4318cf5953be" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:10:55 crc kubenswrapper[4680]: I0126 17:10:55.828247 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-97dr2" podUID="cba0da08-3a4e-425d-9bc0-4318cf5953be" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.667650 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" 
event={"ID":"bda51148-a37f-4767-8970-51ea56ecdfc7","Type":"ContainerDied","Data":"990529f3fcfa8fe72d855a069bb0c66f7ac6ed98e662ad0d1e22a6c83deb7af5"} Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.667506 4680 generic.go:334] "Generic (PLEG): container finished" podID="bda51148-a37f-4767-8970-51ea56ecdfc7" containerID="990529f3fcfa8fe72d855a069bb0c66f7ac6ed98e662ad0d1e22a6c83deb7af5" exitCode=1 Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.671992 4680 scope.go:117] "RemoveContainer" containerID="990529f3fcfa8fe72d855a069bb0c66f7ac6ed98e662ad0d1e22a6c83deb7af5" Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.792034 4680 patch_prober.go:28] interesting pod/oauth-openshift-79656f7ff7-xktrf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.792167 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podUID="78169d3a-fe9d-418a-9714-211277755dc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.792797 4680 patch_prober.go:28] interesting pod/oauth-openshift-79656f7ff7-xktrf container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.792852 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podUID="78169d3a-fe9d-418a-9714-211277755dc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:56 crc kubenswrapper[4680]: I0126 17:10:56.834417 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.123227 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.123781 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.283872 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 
17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.330289 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.330625 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.330326 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.330833 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.478372 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="f08c6bd8-31d6-4769-8ed3-9e46e40a4e66" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.206:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.478520 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="f08c6bd8-31d6-4769-8ed3-9e46e40a4e66" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.206:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: E0126 17:10:57.647874 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.681919 4680 generic.go:334] "Generic (PLEG): container finished" podID="db8c5f93-fbaf-4f34-9214-ec7e463beb79" containerID="3df85599ace1f883339ccfd636804b7ca4fb3f308a8aab0c2c1700e69f5cebec" exitCode=1 Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.681999 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" event={"ID":"db8c5f93-fbaf-4f34-9214-ec7e463beb79","Type":"ContainerDied","Data":"3df85599ace1f883339ccfd636804b7ca4fb3f308a8aab0c2c1700e69f5cebec"} Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.684871 4680 scope.go:117] "RemoveContainer" containerID="3df85599ace1f883339ccfd636804b7ca4fb3f308a8aab0c2c1700e69f5cebec" Jan 26 17:10:57 crc kubenswrapper[4680]: 
I0126 17:10:57.698320 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.918719 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-lhqbm" podUID="e9eb4184-e77b-49c1-b4af-cae5dc77b953" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:57 crc kubenswrapper[4680]: I0126 17:10:57.961278 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" podUID="c4788302-e01e-485b-b716-a6db7a2ac272" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.032326 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.138411 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.387520 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.440319 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbcp8" podUID="8bd876e3-9283-4de7-80b0-3c1787745bfb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.496380 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-8llvz" podUID="bba916e9-436b-4c01-ba4c-2f758ed6d988" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.538301 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pjtw6" podUID="4ef5b147-3e74-4417-9c89-e0f33fc62eba" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.579361 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-5gkjr" podUID="f79a7334-75ae-40a1-81c3-ce27e0567de9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.698721 4680 generic.go:334] "Generic (PLEG): container finished" podID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" containerID="4010afe04aecf62fe6379c58ef15bae6fc63e5aa76a6eb9c59aea57ae7726747" exitCode=1 Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.698782 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" event={"ID":"cb36e4de-bd33-4daf-83f5-1ced8ce56c90","Type":"ContainerDied","Data":"4010afe04aecf62fe6379c58ef15bae6fc63e5aa76a6eb9c59aea57ae7726747"} Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.699796 4680 scope.go:117] "RemoveContainer" containerID="4010afe04aecf62fe6379c58ef15bae6fc63e5aa76a6eb9c59aea57ae7726747" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.791265 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mksvz" podUID="5140d771-5948-4407-b1d9-aa1aa80415a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:58 crc kubenswrapper[4680]: I0126 17:10:58.946343 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" podUID="bb8599b0-8155-440a-a0f5-505f73113a1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.066240 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xbcl9" podUID="505c1441-c509-4792-ac15-8b218143a69f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.141331 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" podUID="5d25f9f6-553d-477c-82f7-a25f017cb21a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.305334 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fthk4" podUID="19dbdff9-08dd-449c-8794-20b497c7119d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.502327 4680 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-58jzp" podUID="923528ea-e48b-477c-aa11-6912e8167448" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.565487 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tlm4s" podUID="c0e0de34-8f98-4db6-abf2-856f6477119e" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.71:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.565657 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" podUID="1fd34661-ceb8-4b7a-a3f7-deedab72f5dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": read tcp 10.217.0.2:37424->10.217.0.46:8080: read: connection reset by peer" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.711874 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" event={"ID":"db8c5f93-fbaf-4f34-9214-ec7e463beb79","Type":"ContainerStarted","Data":"b36122516541b9d95cc12d791c0274bc283d635cf300e8ab9ec0313e82b45b0e"} Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.713917 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.729441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" event={"ID":"bda51148-a37f-4767-8970-51ea56ecdfc7","Type":"ContainerStarted","Data":"bffd3e65a07a981477fa8ce2836167a8966a7afe27021cee1a50bb4441cdeb11"} Jan 26 17:10:59 crc kubenswrapper[4680]: I0126 17:10:59.730230 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 17:10:59 crc kubenswrapper[4680]: E0126 17:10:59.927780 4680 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.108677 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.108731 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.108844 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get 
\"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.108901 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.123508 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.123602 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.170720 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:11:00 crc kubenswrapper[4680]: E0126 17:11:00.171474 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.425260 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.425277 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8657f7848d-ls2sv" podUID="34651440-00a2-4b50-a6cc-a0230d4def92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.746511 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" event={"ID":"cb36e4de-bd33-4daf-83f5-1ced8ce56c90","Type":"ContainerStarted","Data":"5788c5f4605e5f30c862452c649b3935ce81a699a501d50d2cae92f5dbf02a7b"} Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.746747 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.748805 4680 generic.go:334] "Generic (PLEG): container finished" podID="1fd34661-ceb8-4b7a-a3f7-deedab72f5dc" 
containerID="248bbbbfd3edd726ecb67576229c0c059a5d15cc4accb826e0dfab80afed00be" exitCode=1 Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.749370 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" event={"ID":"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc","Type":"ContainerDied","Data":"248bbbbfd3edd726ecb67576229c0c059a5d15cc4accb826e0dfab80afed00be"} Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.749890 4680 scope.go:117] "RemoveContainer" containerID="248bbbbfd3edd726ecb67576229c0c059a5d15cc4accb826e0dfab80afed00be" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.769802 4680 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-xr5vv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.24:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.769867 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xr5vv" podUID="89cb99e5-a352-468f-bcc6-a90442f0bd6b" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.24:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.893311 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" podUID="13d27b97-b926-4a78-991d-e969612ff055" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:00 crc kubenswrapper[4680]: I0126 17:11:00.893324 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6f6cc977fc-qfzhc" podUID="13d27b97-b926-4a78-991d-e969612ff055" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.167242 4680 patch_prober.go:28] interesting pod/console-6fc5c8f49-48gmj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.167345 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6fc5c8f49-48gmj" podUID="b07144ca-cc49-4f4f-9620-88ebbdffce43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.264130 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.270680 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.292357 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" podUID="0d273daa-c1c4-4746-9e28-abf5e15aa387" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.292401 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847574fb9-wfhs8" podUID="0d273daa-c1c4-4746-9e28-abf5e15aa387" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.620555 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.620730 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.620764 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.620789 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.620520 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.652817 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.653307 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities\") pod 
\"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.653330 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2xt\" (UniqueName: \"kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.755408 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.755467 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns2xt\" (UniqueName: \"kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.755577 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.757523 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.762669 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:01 crc kubenswrapper[4680]: I0126 17:11:01.850783 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-nqg55" podUID="11ded85a-b350-41a1-b9f2-f57901f116c5" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.170373 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-jzg2h" podUID="a30260c8-eca8-456a-a94d-61839973f6ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.49:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.211393 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-jzg2h" podUID="a30260c8-eca8-456a-a94d-61839973f6ee" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.49:29150/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.772731 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" event={"ID":"1fd34661-ceb8-4b7a-a3f7-deedab72f5dc","Type":"ContainerStarted","Data":"3fdc3c74aee65e75ba56912d7e506041c1f4a08fa1b98ebc8ca92c1030f9faf6"} Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.773670 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.823165 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.823193 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e6476c77-06ae-4747-900e-41566a6063ca" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.826564 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.827430 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.828548 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.830076 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.832045 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"0f7e70ab3a5e0002633849b05e2bf96b4ce0e2caa442de96c480d37d0ee9986f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 26 17:11:02 crc kubenswrapper[4680]: I0126 17:11:02.832426 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerName="ceilometer-central-agent" containerID="cri-o://0f7e70ab3a5e0002633849b05e2bf96b4ce0e2caa442de96c480d37d0ee9986f" gracePeriod=30 Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.000582 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-x6xh2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.000639 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" 
podUID="937d1b38-2a29-4846-bb8c-7995c583ac89" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.000604 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-x6xh2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.000748 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-x6xh2" podUID="937d1b38-2a29-4846-bb8c-7995c583ac89" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.085246 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-mqkb5" podUID="71470be4-25d6-4dab-8fa6-3850938403e2" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.085340 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-mqkb5" podUID="71470be4-25d6-4dab-8fa6-3850938403e2" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.085415 4680 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.085476 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.124458 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.124520 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.124605 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.129390 4680 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-bjnls container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.129487 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-bjnls" podUID="f93ff197-4612-44d8-b67e-c98ae2906899" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.617703 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" podUID="db8c5f93-fbaf-4f34-9214-ec7e463beb79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.621881 4680 patch_prober.go:28] interesting pod/controller-manager-6d986fd6d8-rbc4h container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.621971 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" podUID="e7b8a972-ec6d-4501-80ed-cdcaba552029" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.631235 4680 patch_prober.go:28] interesting pod/controller-manager-6d986fd6d8-rbc4h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.631423 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d986fd6d8-rbc4h" podUID="e7b8a972-ec6d-4501-80ed-cdcaba552029" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.671656 4680 patch_prober.go:28] interesting pod/route-controller-manager-798fbb5b47-4kwr8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.671724 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" podUID="41ae1a5a-420b-459d-bc28-071edd6dca3e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.671661 4680 patch_prober.go:28] interesting pod/route-controller-manager-798fbb5b47-4kwr8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.671905 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-798fbb5b47-4kwr8" podUID="41ae1a5a-420b-459d-bc28-071edd6dca3e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.839405 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5qzzv" podUID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:03 crc kubenswrapper[4680]: I0126 17:11:03.839434 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5qzzv" podUID="25be1e15-1f83-4ee7-82d4-9ffd6ff46f82" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.185499 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" podUID="51e3cde0-6a23-4d62-83ca-fc16415da2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.186440 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vxhhf" podUID="51e3cde0-6a23-4d62-83ca-fc16415da2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.262160 4680 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.262230 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.426390 
4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.426466 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgknk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.598842 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.598947 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fgknk" podUID="643d9c97-4160-40e2-9f56-e200526e2a8b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.636925 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-42hb6" podUID="132d1f81-6236-4d81-a220-72d5bb914144" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:04 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:04 crc kubenswrapper[4680]: > Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.763879 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.763937 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.763951 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.763987 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:04 
crc kubenswrapper[4680]: I0126 17:11:04.764112 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.764147 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.765018 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.765086 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" containerID="cri-o://c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891" gracePeriod=30 Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.792651 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" event={"ID":"5d25f9f6-553d-477c-82f7-a25f017cb21a","Type":"ContainerDied","Data":"a6ba8bccce8332fdf6bb09f8c3f57ea9cf2978cc776db02200efc0a3ad492c40"} Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.792877 4680 generic.go:334] "Generic (PLEG): container finished" podID="5d25f9f6-553d-477c-82f7-a25f017cb21a" containerID="a6ba8bccce8332fdf6bb09f8c3f57ea9cf2978cc776db02200efc0a3ad492c40" exitCode=1 Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.793695 4680 scope.go:117] "RemoveContainer" containerID="a6ba8bccce8332fdf6bb09f8c3f57ea9cf2978cc776db02200efc0a3ad492c40" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.824577 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.824688 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.840354 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.840432 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.840487 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-42hb6" podUID="132d1f81-6236-4d81-a220-72d5bb914144" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:04 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:04 crc kubenswrapper[4680]: >
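The multi-line `timeout: failed to connect service ":50051" within 1s` outputs above come from exec-based probes in the marketplace registry pods: the probe command (a grpc_health_probe-style check) has to reach the catalog gRPC endpoint on :50051 within one second, and the kubelet records whatever the command prints. A stdlib approximation of that connect-with-deadline check (address and message wording assumed for illustration; the real probe speaks the gRPC health protocol, not bare TCP):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumed endpoint: the registry-server's catalog gRPC port from the log.
	const addr = "127.0.0.1:50051"
	const timeout = 1 * time.Second

	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// Mirrors the probe output recorded above.
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	conn.Close()
	fmt.Println("service reachable")
}
```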
containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:04 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:04 crc kubenswrapper[4680]: > Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.847624 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-cs6qq" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:04 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:04 crc kubenswrapper[4680]: > Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.869008 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns2xt\" (UniqueName: \"kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt\") pod \"community-operators-5s7hj\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:04 crc kubenswrapper[4680]: I0126 17:11:04.898831 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.093932 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.093967 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.094013 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.094032 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.094147 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.094174 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.095409 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" 
containerStatusID={"Type":"cri-o","ID":"1a89dde78c90645addde8ab435699d67b96da71aa7b50a3acdb01013dcaa1be6"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.095472 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" containerID="cri-o://1a89dde78c90645addde8ab435699d67b96da71aa7b50a3acdb01013dcaa1be6" gracePeriod=30 Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.107542 4680 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.107592 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.107665 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.184227 4680 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s6kzf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.184295 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podUID="8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.187435 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.314315 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podUID="0ba4109b-0e34-4c97-884a-d70052bf8082" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.314710 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355302 4680 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s6kzf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe 
status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355315 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" podUID="0ba4109b-0e34-4c97-884a-d70052bf8082" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355376 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" podUID="8f4cdb3f-6985-4f35-a0b1-e79d9aa79ec8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355830 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355858 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.355894 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.356118 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.356150 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.356221 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.357270 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d9b73696c3b7a9170fcb60e40f035b6612a34b553656cc1e94dc4e3ee09fb830"} pod="openshift-ingress/router-default-5444994796-9kzqd" containerMessage="Container router failed liveness probe, will be restarted" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.357317 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" 
containerID="cri-o://d9b73696c3b7a9170fcb60e40f035b6612a34b553656cc1e94dc4e3ee09fb830" gracePeriod=10 Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.441905 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.442181 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.442453 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.601223 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.601291 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.601341 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.733019 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s6kzf" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.745730 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.758690 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-757cd979b5-zszgr" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.806283 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" event={"ID":"5d25f9f6-553d-477c-82f7-a25f017cb21a","Type":"ContainerStarted","Data":"790c386abb8be4ecfc553996223b1f4cad2efbb67f3505f6d8071d08d40f90aa"} Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.806651 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.808458 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.808602 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.809477 4680 generic.go:334] "Generic (PLEG): container finished" podID="bb8599b0-8155-440a-a0f5-505f73113a1c" containerID="ee28209194f02cb31abc16d2a243eb302051f6048fcebc7ca8a3c770cd567f90" exitCode=1 Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.809691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" event={"ID":"bb8599b0-8155-440a-a0f5-505f73113a1c","Type":"ContainerDied","Data":"ee28209194f02cb31abc16d2a243eb302051f6048fcebc7ca8a3c770cd567f90"} Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.811301 4680 scope.go:117] "RemoveContainer" containerID="ee28209194f02cb31abc16d2a243eb302051f6048fcebc7ca8a3c770cd567f90" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.833996 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.835310 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.849572 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.849634 4680 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="de6c2781153effe369be783af6ed293d01fef5bf82ab1cc0850270cef8b4d2d4" exitCode=1 Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.851236 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"d2450d7e7653000ecc62443cb5bbd7b899fd2f2ad47379916b6cc8493fc8894c"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.851275 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" containerID="cri-o://d2450d7e7653000ecc62443cb5bbd7b899fd2f2ad47379916b6cc8493fc8894c" gracePeriod=30 Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.851366 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226"} 
pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.851483 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"de6c2781153effe369be783af6ed293d01fef5bf82ab1cc0850270cef8b4d2d4"} Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.851519 4680 scope.go:117] "RemoveContainer" containerID="d7c3c17bed65e90245b2df23644cb631a214de7720888b5309b0aa9134553dbf" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.852629 4680 scope.go:117] "RemoveContainer" containerID="de6c2781153effe369be783af6ed293d01fef5bf82ab1cc0850270cef8b4d2d4" Jan 26 17:11:05 crc kubenswrapper[4680]: I0126 17:11:05.922138 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.123061 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.123442 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:06 crc kubenswrapper[4680]: E0126 17:11:06.209522 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod453f2d30_8c6f_4626_9078_b34554f72d7b.slice/crio-c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97f4ad2_2288_4dd8_a30f_fb5c407855a3.slice/crio-17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a.scope\": RecentStats: unable to find data in memory cache]" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.443126 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.443331 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.443376 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.764041 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.825987 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.828891 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-97dr2" podUID="cba0da08-3a4e-425d-9bc0-4318cf5953be" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.828954 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.833799 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-97dr2" podUID="cba0da08-3a4e-425d-9bc0-4318cf5953be" containerName="registry-server" probeResult="failure" output="command timed out" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.873271 4680 patch_prober.go:28] interesting pod/oauth-openshift-79656f7ff7-xktrf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.873338 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podUID="78169d3a-fe9d-418a-9714-211277755dc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.873271 4680 patch_prober.go:28] interesting pod/oauth-openshift-79656f7ff7-xktrf container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.873386 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79656f7ff7-xktrf" podUID="78169d3a-fe9d-418a-9714-211277755dc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.877526 4680 generic.go:334] "Generic (PLEG): container finished" podID="c4788302-e01e-485b-b716-a6db7a2ac272" containerID="3a5c8662c090ed2563e0e852c4113db7062512d19d779887c9c4b3bca7a6c298" exitCode=1 Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.877648 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" event={"ID":"c4788302-e01e-485b-b716-a6db7a2ac272","Type":"ContainerDied","Data":"3a5c8662c090ed2563e0e852c4113db7062512d19d779887c9c4b3bca7a6c298"} Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.878379 4680 scope.go:117] "RemoveContainer" 
containerID="3a5c8662c090ed2563e0e852c4113db7062512d19d779887c9c4b3bca7a6c298" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.890828 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" event={"ID":"bb8599b0-8155-440a-a0f5-505f73113a1c","Type":"ContainerStarted","Data":"a85d825975ac0c7c7cc41eb8884374d68cd740da9e408e03bdd201397efa91eb"} Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.891775 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.911990 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.912028 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.913041 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.943316 4680 generic.go:334] "Generic (PLEG): container finished" podID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerID="1a89dde78c90645addde8ab435699d67b96da71aa7b50a3acdb01013dcaa1be6" exitCode=0 Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.943538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" event={"ID":"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1","Type":"ContainerDied","Data":"1a89dde78c90645addde8ab435699d67b96da71aa7b50a3acdb01013dcaa1be6"} Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.943822 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.944890 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.944946 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.947098 4680 generic.go:334] "Generic (PLEG): container finished" podID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerID="c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891" exitCode=0 Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.947174 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" event={"ID":"453f2d30-8c6f-4626-9078-b34554f72d7b","Type":"ContainerDied","Data":"c101a7bb5f9ab3d4e3529acfd0009ed52a13264d59b4dfc94abb96cd64c3e891"} Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 
17:11:06.948140 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.948218 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.948250 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.951717 4680 generic.go:334] "Generic (PLEG): container finished" podID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerID="17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a" exitCode=0 Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.951850 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerDied","Data":"17a801c04d4b7461de21e6164a1c16bf3893870fe791f48d720bda63efbba26a"} Jan 26 17:11:06 crc kubenswrapper[4680]: I0126 17:11:06.999285 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.097487 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.097519 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.097615 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.097989 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" podUID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.292397 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzvqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.292789 4680 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" podUID="bda51148-a37f-4767-8970-51ea56ecdfc7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.292840 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" podUID="5283315c-decc-4a61-aee5-74715a2f2393" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.293003 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.328384 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xzvqm" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.347912 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.348479 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.348556 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.349775 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" podUID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.466555 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": EOF" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.466648 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.468181 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" podUID="58579a35-1ab3-4610-9d38-66824866b438" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.478362 4680 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="f08c6bd8-31d6-4769-8ed3-9e46e40a4e66" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.206:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.478610 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="f08c6bd8-31d6-4769-8ed3-9e46e40a4e66" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.206:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.698245 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" podUID="cb36e4de-bd33-4daf-83f5-1ced8ce56c90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.918794 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.998360 4680 generic.go:334] "Generic (PLEG): container finished" podID="c9bc9b0e-b690-47c1-92ea-bea335fc0b41" containerID="81c12121e4e9f48b25a681a397252bd83835e51502babae0efc0e3b161e4fd64" exitCode=1 Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.998463 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" event={"ID":"c9bc9b0e-b690-47c1-92ea-bea335fc0b41","Type":"ContainerDied","Data":"81c12121e4e9f48b25a681a397252bd83835e51502babae0efc0e3b161e4fd64"} Jan 26 17:11:07 crc kubenswrapper[4680]: I0126 17:11:07.999053 4680 scope.go:117] "RemoveContainer" containerID="81c12121e4e9f48b25a681a397252bd83835e51502babae0efc0e3b161e4fd64" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.010351 4680 generic.go:334] "Generic (PLEG): container finished" podID="58579a35-1ab3-4610-9d38-66824866b438" containerID="a0c5f7103679d530bd07e96b34e34ded107bda0a36b8734db02901588d7a0f7c" exitCode=1 Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.010458 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" event={"ID":"58579a35-1ab3-4610-9d38-66824866b438","Type":"ContainerDied","Data":"a0c5f7103679d530bd07e96b34e34ded107bda0a36b8734db02901588d7a0f7c"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.011122 4680 scope.go:117] "RemoveContainer" containerID="a0c5f7103679d530bd07e96b34e34ded107bda0a36b8734db02901588d7a0f7c" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.013826 4680 generic.go:334] "Generic (PLEG): container finished" podID="27eb5e1a-3047-4e87-9ad1-f948e11dfe25" containerID="51174230eb6925176d4a979efd4998e070b4ee7a3ba6d62f880e406d4d334059" exitCode=1 Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.013933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" event={"ID":"27eb5e1a-3047-4e87-9ad1-f948e11dfe25","Type":"ContainerDied","Data":"51174230eb6925176d4a979efd4998e070b4ee7a3ba6d62f880e406d4d334059"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.014587 4680 scope.go:117] 
"RemoveContainer" containerID="51174230eb6925176d4a979efd4998e070b4ee7a3ba6d62f880e406d4d334059" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.043230 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" event={"ID":"b97f4ad2-2288-4dd8-a30f-fb5c407855a3","Type":"ContainerStarted","Data":"d670aaf0554e57b958ca9b9e4542f48ce54da955d3914ab79d30efa6693c7ea2"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.044201 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.055393 4680 generic.go:334] "Generic (PLEG): container finished" podID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerID="d2450d7e7653000ecc62443cb5bbd7b899fd2f2ad47379916b6cc8493fc8894c" exitCode=0 Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.056042 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" event={"ID":"1d1c4ec3-28a9-4741-844b-ea48d59f84d3","Type":"ContainerDied","Data":"d2450d7e7653000ecc62443cb5bbd7b899fd2f2ad47379916b6cc8493fc8894c"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.065130 4680 generic.go:334] "Generic (PLEG): container finished" podID="5c81deb3-0ad3-4ec0-91af-837aee09d577" containerID="13817b73a3b5874f8457cfbbcc68f2ed27600bb263af6f1e9b9092ab558bae88" exitCode=1 Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.065214 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" event={"ID":"5c81deb3-0ad3-4ec0-91af-837aee09d577","Type":"ContainerDied","Data":"13817b73a3b5874f8457cfbbcc68f2ed27600bb263af6f1e9b9092ab558bae88"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.065833 4680 scope.go:117] "RemoveContainer" containerID="13817b73a3b5874f8457cfbbcc68f2ed27600bb263af6f1e9b9092ab558bae88" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.079560 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" event={"ID":"c4788302-e01e-485b-b716-a6db7a2ac272","Type":"ContainerStarted","Data":"62933f48019996ad86c16a31563c591cbbea4f1ae1bec926a019cdd31710ed49"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.080425 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.095822 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.103750 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8625f23a2eaff25ccd6bcb8dabe5ae737b6b7a76e0dcc6ebf68187c72bc1266"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.113797 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" event={"ID":"0bf062c4-962d-4bba-98d4-ee41fe1cc0b1","Type":"ContainerStarted","Data":"43fa2bec1236dcd6e52e8cc95d7176c082e23651a6b8db9e2bb631e803def8b5"} Jan 26 17:11:08 crc 
kubenswrapper[4680]: I0126 17:11:08.116305 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.116366 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.130666 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.130910 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.131272 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" event={"ID":"453f2d30-8c6f-4626-9078-b34554f72d7b","Type":"ContainerStarted","Data":"5dd6a57d85922c72f9b05ad9f4536d922035bac01fd010f6e68e7d60c98080f9"} Jan 26 17:11:08 crc kubenswrapper[4680]: I0126 17:11:08.469364 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbd5d"] Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.142926 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" event={"ID":"c9bc9b0e-b690-47c1-92ea-bea335fc0b41","Type":"ContainerStarted","Data":"8d7aad97105fad9d4c5aae20682defada0bf663a03e7d073fec0098f5c4ee0c8"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.144156 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.158136 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" event={"ID":"58579a35-1ab3-4610-9d38-66824866b438","Type":"ContainerStarted","Data":"34e285b35bbb8e3853374d34cbb3231a731adc4b4c9865d9cc440abd3b166b8a"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.159230 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.161191 4680 generic.go:334] "Generic (PLEG): container finished" podID="c03dc150-d5dd-4cb8-8250-e2b2b95980dd" containerID="653dbe82129f079a67cd2153cb7c52fdbdca552f965cdee8f0be22c2457fab91" exitCode=0 Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.161243 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jbd5d" event={"ID":"c03dc150-d5dd-4cb8-8250-e2b2b95980dd","Type":"ContainerDied","Data":"653dbe82129f079a67cd2153cb7c52fdbdca552f965cdee8f0be22c2457fab91"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.161264 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbd5d" event={"ID":"c03dc150-d5dd-4cb8-8250-e2b2b95980dd","Type":"ContainerStarted","Data":"47b54ac1456086857db55e5807ba985fe3052462311cfb121eb54aa7833170b3"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.168136 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" event={"ID":"27eb5e1a-3047-4e87-9ad1-f948e11dfe25","Type":"ContainerStarted","Data":"ce9f78432ccc587d7b07d714ab8f25e00ddc274042b3db61c112af0f6e92459c"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.169650 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.188494 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.188532 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.202326 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.202368 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" event={"ID":"1d1c4ec3-28a9-4741-844b-ea48d59f84d3","Type":"ContainerStarted","Data":"e792ab2c3156f3848a515fc3fa0ebc87f74b1f2ce66c777a32ded87693353f7b"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.202392 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" event={"ID":"5c81deb3-0ad3-4ec0-91af-837aee09d577","Type":"ContainerStarted","Data":"735a2a89b293603b71a55dc95767321290b6501197671cba2d4786b74bd478cc"} Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.202411 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.206864 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnhhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.206908 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" podUID="453f2d30-8c6f-4626-9078-b34554f72d7b" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.206984 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.207635 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cg5d7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.207765 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" podUID="0bf062c4-962d-4bba-98d4-ee41fe1cc0b1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 17:11:09 crc kubenswrapper[4680]: I0126 17:11:09.868835 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:10 crc kubenswrapper[4680]: W0126 17:11:10.031084 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2d682c0_50c8_4fba_9f4d_b0f0e2abdbb1.slice/crio-077cfb59a4cbe824f580914e0ae761ca0b5b93cc28ad898fc8119b6c5fee8d65 WatchSource:0}: Error finding container 077cfb59a4cbe824f580914e0ae761ca0b5b93cc28ad898fc8119b6c5fee8d65: Status 404 returned error can't find the container with id 077cfb59a4cbe824f580914e0ae761ca0b5b93cc28ad898fc8119b6c5fee8d65 Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.214873 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerStarted","Data":"077cfb59a4cbe824f580914e0ae761ca0b5b93cc28ad898fc8119b6c5fee8d65"} Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.218052 4680 generic.go:334] "Generic (PLEG): container finished" podID="f43c12ef-d0d4-4ff9-802f-652e3e4188cc" containerID="0f7e70ab3a5e0002633849b05e2bf96b4ce0e2caa442de96c480d37d0ee9986f" exitCode=0 Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.218109 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerDied","Data":"0f7e70ab3a5e0002633849b05e2bf96b4ce0e2caa442de96c480d37d0ee9986f"} Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.218752 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vjf22 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.218789 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" podUID="1d1c4ec3-28a9-4741-844b-ea48d59f84d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.220292 4680 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.220320 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:10 crc kubenswrapper[4680]: I0126 17:11:10.841496 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:11 crc kubenswrapper[4680]: I0126 17:11:11.251459 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerID="7316a1c8390bfa44461122199ee9bca47c4b56db2c346d58b43f0a436813a207" exitCode=0 Jan 26 17:11:11 crc kubenswrapper[4680]: I0126 17:11:11.252185 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerDied","Data":"7316a1c8390bfa44461122199ee9bca47c4b56db2c346d58b43f0a436813a207"} Jan 26 17:11:11 crc kubenswrapper[4680]: I0126 17:11:11.259319 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f43c12ef-d0d4-4ff9-802f-652e3e4188cc","Type":"ContainerStarted","Data":"e41125f5bf390a5703c6502fbf93ff542139a8f908234a75ced62b4afc423ccf"} Jan 26 17:11:11 crc kubenswrapper[4680]: I0126 17:11:11.891679 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" containerID="cri-o://1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226" gracePeriod=24 Jan 26 17:11:12 crc kubenswrapper[4680]: I0126 17:11:12.122559 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:12 crc kubenswrapper[4680]: I0126 17:11:12.122572 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9bv9l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:11:12 crc kubenswrapper[4680]: I0126 17:11:12.122636 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:12 crc kubenswrapper[4680]: I0126 17:11:12.122641 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" podUID="b97f4ad2-2288-4dd8-a30f-fb5c407855a3" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:11:13 crc kubenswrapper[4680]: E0126 17:11:13.041634 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 17:11:13 crc kubenswrapper[4680]: E0126 17:11:13.044581 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 17:11:13 crc kubenswrapper[4680]: E0126 17:11:13.052932 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 17:11:13 crc kubenswrapper[4680]: E0126 17:11:13.053005 4680 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" Jan 26 17:11:13 crc kubenswrapper[4680]: I0126 17:11:13.288602 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerStarted","Data":"f38cf59faa67e3e3c55837e0608bf48d9f79900847cc65632f85d736b388c5aa"} Jan 26 17:11:13 crc kubenswrapper[4680]: I0126 17:11:13.629931 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854r9htd" Jan 26 17:11:13 crc kubenswrapper[4680]: I0126 17:11:13.809756 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnhhn" Jan 26 17:11:14 crc kubenswrapper[4680]: I0126 17:11:14.097405 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cg5d7" Jan 26 17:11:14 crc kubenswrapper[4680]: I0126 17:11:14.157844 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 26 17:11:14 crc kubenswrapper[4680]: [+]has-synced ok Jan 26 17:11:14 crc kubenswrapper[4680]: [-]process-running failed: reason withheld Jan 26 17:11:14 crc kubenswrapper[4680]: healthz check failed Jan 26 17:11:14 crc kubenswrapper[4680]: I0126 17:11:14.157913 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:11:14 crc 
kubenswrapper[4680]: I0126 17:11:14.170753 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:11:14 crc kubenswrapper[4680]: E0126 17:11:14.171604 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:11:14 crc kubenswrapper[4680]: I0126 17:11:14.447236 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vjf22" Jan 26 17:11:15 crc kubenswrapper[4680]: I0126 17:11:15.131406 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9bv9l" Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.363525 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerID="1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226" exitCode=0 Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.364268 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerDied","Data":"1e99192283eac284a968e452e75abe34ab658adcb78dea1b00c7c4f7b000b226"} Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.380887 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-9kzqd_c5b7fdeb-1a25-4193-bf77-6645f6e8370a/router/0.log" Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.380959 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerID="d9b73696c3b7a9170fcb60e40f035b6612a34b553656cc1e94dc4e3ee09fb830" exitCode=137 Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.381101 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kzqd" event={"ID":"c5b7fdeb-1a25-4193-bf77-6645f6e8370a","Type":"ContainerDied","Data":"d9b73696c3b7a9170fcb60e40f035b6612a34b553656cc1e94dc4e3ee09fb830"} Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.386521 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerID="f38cf59faa67e3e3c55837e0608bf48d9f79900847cc65632f85d736b388c5aa" exitCode=0 Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.386560 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerDied","Data":"f38cf59faa67e3e3c55837e0608bf48d9f79900847cc65632f85d736b388c5aa"} Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.921832 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-xmpgf" Jan 26 17:11:16 crc kubenswrapper[4680]: I0126 17:11:16.994705 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-t4zl4" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.099919 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-9nr5c" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.136569 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mmcpt" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.295094 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-sl65h" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.362516 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-789vw" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.718178 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qjcpj" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.918744 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.929458 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-xjmwr" Jan 26 17:11:17 crc kubenswrapper[4680]: I0126 17:11:17.931140 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:18 crc kubenswrapper[4680]: I0126 17:11:18.116497 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-chmcm" Jan 26 17:11:18 crc kubenswrapper[4680]: I0126 17:11:18.432327 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.752748 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.778498 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.906644 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.931029 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.931206 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:20 crc kubenswrapper[4680]: I0126 17:11:20.931414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v8qx\" (UniqueName: \"kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.033195 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.033301 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v8qx\" (UniqueName: \"kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.033428 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.034346 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.034660 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.073922 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8v8qx\" (UniqueName: \"kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx\") pod \"redhat-operators-mwhz9\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:21 crc kubenswrapper[4680]: I0126 17:11:21.106795 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:22 crc kubenswrapper[4680]: I0126 17:11:22.513147 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbd5d" event={"ID":"c03dc150-d5dd-4cb8-8250-e2b2b95980dd","Type":"ContainerStarted","Data":"c0f4916fbac9bdf982e71cca7e613149f62060e50e8bf103f1cc1e8ab213c929"} Jan 26 17:11:22 crc kubenswrapper[4680]: I0126 17:11:22.529758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6e6f45ac-80ed-41f2-b9b8-94e60a1656d4","Type":"ContainerStarted","Data":"d0bd281b17452c40fdd385833b83e297b9d8cc2b5e8131ed8968021580f92602"} Jan 26 17:11:22 crc kubenswrapper[4680]: I0126 17:11:22.540221 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-9kzqd_c5b7fdeb-1a25-4193-bf77-6645f6e8370a/router/0.log" Jan 26 17:11:22 crc kubenswrapper[4680]: I0126 17:11:22.540301 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kzqd" event={"ID":"c5b7fdeb-1a25-4193-bf77-6645f6e8370a","Type":"ContainerStarted","Data":"3369d457179d37893aa9d1bf6ef63d1c365c497a4d6c287aa1f192574c85641f"} Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.035588 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.035913 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.119771 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.123296 4680 patch_prober.go:28] interesting pod/router-default-5444994796-9kzqd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:11:23 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Jan 26 17:11:23 crc kubenswrapper[4680]: [+]process-running ok Jan 26 17:11:23 crc kubenswrapper[4680]: healthz check failed Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.123663 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kzqd" podUID="c5b7fdeb-1a25-4193-bf77-6645f6e8370a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.386298 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.551856 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerStarted","Data":"32d1b3b502d262053594ea6f07fca20f95ac81bffc6b3ddd42abee15a536c5ad"} Jan 26 17:11:23 crc kubenswrapper[4680]: 
I0126 17:11:23.555035 4680 generic.go:334] "Generic (PLEG): container finished" podID="c03dc150-d5dd-4cb8-8250-e2b2b95980dd" containerID="c0f4916fbac9bdf982e71cca7e613149f62060e50e8bf103f1cc1e8ab213c929" exitCode=0 Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.555112 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbd5d" event={"ID":"c03dc150-d5dd-4cb8-8250-e2b2b95980dd","Type":"ContainerDied","Data":"c0f4916fbac9bdf982e71cca7e613149f62060e50e8bf103f1cc1e8ab213c929"} Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.556441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerStarted","Data":"9f7b6efbda9a33829dae57d89461d0783462a5b990ba3b351c8e6d206951d6a0"} Jan 26 17:11:23 crc kubenswrapper[4680]: I0126 17:11:23.627586 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5s7hj" podStartSLOduration=28.497960729 podStartE2EDuration="39.627563517s" podCreationTimestamp="2026-01-26 17:10:44 +0000 UTC" firstStartedPulling="2026-01-26 17:11:11.25423754 +0000 UTC m=+3946.415509809" lastFinishedPulling="2026-01-26 17:11:22.383840328 +0000 UTC m=+3957.545112597" observedRunningTime="2026-01-26 17:11:23.58746388 +0000 UTC m=+3958.748736149" watchObservedRunningTime="2026-01-26 17:11:23.627563517 +0000 UTC m=+3958.788835786" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.119336 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.127060 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.568948 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbd5d" event={"ID":"c03dc150-d5dd-4cb8-8250-e2b2b95980dd","Type":"ContainerStarted","Data":"2c63b51a813a9760651c0b82d034e4179dbee78be89a902a4822d131ef40419b"} Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.571560 4680 generic.go:334] "Generic (PLEG): container finished" podID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerID="93461d1f4a346449d3badd0070500f8e9720c1a855704d81c37e74190648a7f3" exitCode=0 Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.573479 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerDied","Data":"93461d1f4a346449d3badd0070500f8e9720c1a855704d81c37e74190648a7f3"} Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.578712 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9kzqd" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.594141 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbd5d" podStartSLOduration=29.649961897 podStartE2EDuration="44.594124327s" podCreationTimestamp="2026-01-26 17:10:40 +0000 UTC" firstStartedPulling="2026-01-26 17:11:09.163893192 +0000 UTC m=+3944.325165461" lastFinishedPulling="2026-01-26 17:11:24.108055622 +0000 UTC m=+3959.269327891" observedRunningTime="2026-01-26 17:11:24.588012662 +0000 UTC m=+3959.749284941" watchObservedRunningTime="2026-01-26 
17:11:24.594124327 +0000 UTC m=+3959.755396596" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.899876 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:24 crc kubenswrapper[4680]: I0126 17:11:24.899942 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:25 crc kubenswrapper[4680]: I0126 17:11:25.584359 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerID="5e48ce683c69f87c7bc43e37ec414f3de70e9c48a6544094a1bb4c120ba5b0c4" exitCode=137 Jan 26 17:11:25 crc kubenswrapper[4680]: I0126 17:11:25.584442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerDied","Data":"5e48ce683c69f87c7bc43e37ec414f3de70e9c48a6544094a1bb4c120ba5b0c4"} Jan 26 17:11:25 crc kubenswrapper[4680]: I0126 17:11:25.972720 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5s7hj" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:25 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:25 crc kubenswrapper[4680]: > Jan 26 17:11:26 crc kubenswrapper[4680]: I0126 17:11:26.169746 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:11:26 crc kubenswrapper[4680]: E0126 17:11:26.170029 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:11:26 crc kubenswrapper[4680]: I0126 17:11:26.596209 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerStarted","Data":"87e3480b334053321ae94c5eefca8012e923f2108f20a8b8dff4ca67e1b2df5f"} Jan 26 17:11:29 crc kubenswrapper[4680]: I0126 17:11:29.714136 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"3de73d571aac728f7f2438857216102a2be4b68e68e8fb9ad5bdf9c3cc89518b"} Jan 26 17:11:31 crc kubenswrapper[4680]: I0126 17:11:31.743514 4680 generic.go:334] "Generic (PLEG): container finished" podID="4a678bad-96c4-45a4-8f56-51b4763655b1" containerID="a499704ab2790a08e46423f2c522171c084e470fc7e32e0331fd4392da2966c1" exitCode=1 Jan 26 17:11:31 crc kubenswrapper[4680]: I0126 17:11:31.743604 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"4a678bad-96c4-45a4-8f56-51b4763655b1","Type":"ContainerDied","Data":"a499704ab2790a08e46423f2c522171c084e470fc7e32e0331fd4392da2966c1"} Jan 26 17:11:31 crc kubenswrapper[4680]: I0126 17:11:31.837557 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbd5d" Jan 26 17:11:31 crc kubenswrapper[4680]: 
I0126 17:11:31.837808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbd5d" Jan 26 17:11:31 crc kubenswrapper[4680]: I0126 17:11:31.886037 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbd5d" Jan 26 17:11:32 crc kubenswrapper[4680]: I0126 17:11:32.808199 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbd5d" Jan 26 17:11:33 crc kubenswrapper[4680]: I0126 17:11:33.941455 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbd5d"] Jan 26 17:11:34 crc kubenswrapper[4680]: I0126 17:11:34.301961 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 17:11:34 crc kubenswrapper[4680]: I0126 17:11:34.398977 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" containerID="cri-o://47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" gracePeriod=2 Jan 26 17:11:35 crc kubenswrapper[4680]: I0126 17:11:35.803237 4680 generic.go:334] "Generic (PLEG): container finished" podID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerID="87e3480b334053321ae94c5eefca8012e923f2108f20a8b8dff4ca67e1b2df5f" exitCode=0 Jan 26 17:11:35 crc kubenswrapper[4680]: I0126 17:11:35.804623 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerDied","Data":"87e3480b334053321ae94c5eefca8012e923f2108f20a8b8dff4ca67e1b2df5f"} Jan 26 17:11:35 crc kubenswrapper[4680]: I0126 17:11:35.985265 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5s7hj" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:35 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:35 crc kubenswrapper[4680]: > Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.252477 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355157 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355457 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355580 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355638 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355670 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355759 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnzn4\" (UniqueName: \"kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355795 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355833 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.355864 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret\") pod \"4a678bad-96c4-45a4-8f56-51b4763655b1\" (UID: \"4a678bad-96c4-45a4-8f56-51b4763655b1\") " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.361718 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary" 
(OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.366400 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.367917 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data" (OuterVolumeSpecName: "config-data") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.394531 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.418312 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4" (OuterVolumeSpecName: "kube-api-access-gnzn4") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "kube-api-access-gnzn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.460464 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnzn4\" (UniqueName: \"kubernetes.io/projected/4a678bad-96c4-45a4-8f56-51b4763655b1-kube-api-access-gnzn4\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.460493 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.460506 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.460515 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4a678bad-96c4-45a4-8f56-51b4763655b1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.460546 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.550250 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.563431 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.577942 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.617424 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.624501 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.650234 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4a678bad-96c4-45a4-8f56-51b4763655b1" (UID: "4a678bad-96c4-45a4-8f56-51b4763655b1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.665027 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a678bad-96c4-45a4-8f56-51b4763655b1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.665060 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.665090 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.665102 4680 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4a678bad-96c4-45a4-8f56-51b4763655b1-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.751420 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 17:11:36 crc kubenswrapper[4680]: E0126 17:11:36.756357 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a678bad-96c4-45a4-8f56-51b4763655b1" containerName="tempest-tests-tempest-tests-runner" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.756515 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a678bad-96c4-45a4-8f56-51b4763655b1" containerName="tempest-tests-tempest-tests-runner" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.756919 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a678bad-96c4-45a4-8f56-51b4763655b1" containerName="tempest-tests-tempest-tests-runner" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.757593 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.770743 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.773284 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.774588 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.854426 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxxq9_7906ec80-b796-4c46-9867-cf61576a73b7/registry-server/0.log" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.859851 4680 generic.go:334] "Generic (PLEG): container finished" podID="7906ec80-b796-4c46-9867-cf61576a73b7" containerID="47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" exitCode=137 Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.859933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerDied","Data":"47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9"} Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874361 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874400 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874439 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874464 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874498 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874518 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874562 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874611 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.874664 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df47b\" (UniqueName: \"kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.889430 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"4a678bad-96c4-45a4-8f56-51b4763655b1","Type":"ContainerDied","Data":"b7d35b33f2c73779c7785e17a40ae39b92a98fb8cf55d220b6958968a912ee3f"} Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.889517 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.889706 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d35b33f2c73779c7785e17a40ae39b92a98fb8cf55d220b6958968a912ee3f" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978231 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978300 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df47b\" (UniqueName: \"kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978431 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978449 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978486 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978508 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978541 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.978567 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.982620 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.983057 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.983093 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.983569 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.986025 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.990814 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:36 crc kubenswrapper[4680]: I0126 17:11:36.993629 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config\") pod 
\"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.014867 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.045774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df47b\" (UniqueName: \"kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.076757 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.172623 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:11:37 crc kubenswrapper[4680]: E0126 17:11:37.172823 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.380447 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 17:11:37 crc kubenswrapper[4680]: E0126 17:11:37.808420 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9 is running failed: container process not found" containerID="47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:11:37 crc kubenswrapper[4680]: E0126 17:11:37.810276 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9 is running failed: container process not found" containerID="47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:11:37 crc kubenswrapper[4680]: E0126 17:11:37.811704 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9 is running failed: container process not found" containerID="47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:11:37 crc kubenswrapper[4680]: E0126 17:11:37.811749 4680 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fxxq9" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" Jan 26 17:11:37 crc kubenswrapper[4680]: I0126 17:11:37.980259 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerStarted","Data":"087c66d8e9fc28a61b1e6447be6f2461251ac2d22c082a6d49af2641f44fcfbb"} Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.047624 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mwhz9" podStartSLOduration=6.171807128 podStartE2EDuration="18.047598113s" podCreationTimestamp="2026-01-26 17:11:20 +0000 UTC" firstStartedPulling="2026-01-26 17:11:24.57501866 +0000 UTC m=+3959.736290929" lastFinishedPulling="2026-01-26 17:11:36.450809645 +0000 UTC m=+3971.612081914" observedRunningTime="2026-01-26 17:11:38.037924956 +0000 UTC m=+3973.199197245" watchObservedRunningTime="2026-01-26 17:11:38.047598113 +0000 UTC m=+3973.208870382" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.049697 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxxq9_7906ec80-b796-4c46-9867-cf61576a73b7/registry-server/0.log" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.065500 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxxq9_7906ec80-b796-4c46-9867-cf61576a73b7/registry-server/0.log" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.065688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxxq9" 
event={"ID":"7906ec80-b796-4c46-9867-cf61576a73b7","Type":"ContainerDied","Data":"4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216"} Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.065712 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f50c6439be94d63b43218b71adfe5bbb95275892a7d3e27c58ff897c9c6e216" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.066462 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.140972 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities\") pod \"7906ec80-b796-4c46-9867-cf61576a73b7\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.141046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content\") pod \"7906ec80-b796-4c46-9867-cf61576a73b7\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.141114 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j58d6\" (UniqueName: \"kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6\") pod \"7906ec80-b796-4c46-9867-cf61576a73b7\" (UID: \"7906ec80-b796-4c46-9867-cf61576a73b7\") " Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.143658 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities" (OuterVolumeSpecName: "utilities") pod "7906ec80-b796-4c46-9867-cf61576a73b7" (UID: "7906ec80-b796-4c46-9867-cf61576a73b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.154509 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6" (OuterVolumeSpecName: "kube-api-access-j58d6") pod "7906ec80-b796-4c46-9867-cf61576a73b7" (UID: "7906ec80-b796-4c46-9867-cf61576a73b7"). InnerVolumeSpecName "kube-api-access-j58d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.195914 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7906ec80-b796-4c46-9867-cf61576a73b7" (UID: "7906ec80-b796-4c46-9867-cf61576a73b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.245335 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j58d6\" (UniqueName: \"kubernetes.io/projected/7906ec80-b796-4c46-9867-cf61576a73b7-kube-api-access-j58d6\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.245378 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.245391 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7906ec80-b796-4c46-9867-cf61576a73b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:38 crc kubenswrapper[4680]: I0126 17:11:38.668818 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 17:11:39 crc kubenswrapper[4680]: I0126 17:11:39.075902 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"3c38590c-46c7-4af3-8791-04b8c4830b6f","Type":"ContainerStarted","Data":"44f76a6e359304fc983593e6aecf3f0cc5496b1b0142c9be9ea2ca4282a701f0"} Jan 26 17:11:39 crc kubenswrapper[4680]: I0126 17:11:39.075950 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxxq9" Jan 26 17:11:39 crc kubenswrapper[4680]: I0126 17:11:39.325968 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 17:11:39 crc kubenswrapper[4680]: I0126 17:11:39.328562 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57c4fd656d-gzjfg" Jan 26 17:11:39 crc kubenswrapper[4680]: I0126 17:11:39.350841 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxxq9"] Jan 26 17:11:40 crc kubenswrapper[4680]: I0126 17:11:40.344808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:40 crc kubenswrapper[4680]: I0126 17:11:40.915703 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output=< Jan 26 17:11:40 crc kubenswrapper[4680]: wsrep_local_state_comment (Joined) differs from Synced Jan 26 17:11:40 crc kubenswrapper[4680]: > Jan 26 17:11:41 crc kubenswrapper[4680]: I0126 17:11:41.107610 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:41 crc kubenswrapper[4680]: I0126 17:11:41.107677 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:11:41 crc kubenswrapper[4680]: I0126 17:11:41.183877 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" path="/var/lib/kubelet/pods/7906ec80-b796-4c46-9867-cf61576a73b7/volumes" Jan 26 17:11:42 crc kubenswrapper[4680]: I0126 17:11:42.188949 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mwhz9" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" 
containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:42 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:11:42 crc kubenswrapper[4680]: > Jan 26 17:11:43 crc kubenswrapper[4680]: I0126 17:11:43.129655 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"3c38590c-46c7-4af3-8791-04b8c4830b6f","Type":"ContainerStarted","Data":"920a287113b5f2d348d07808d9064569bfb2b6458b1911a4339ee880f6349c5b"} Jan 26 17:11:43 crc kubenswrapper[4680]: I0126 17:11:43.153736 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=7.153718471 podStartE2EDuration="7.153718471s" podCreationTimestamp="2026-01-26 17:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:11:43.151295831 +0000 UTC m=+3978.312568100" watchObservedRunningTime="2026-01-26 17:11:43.153718471 +0000 UTC m=+3978.314990740" Jan 26 17:11:43 crc kubenswrapper[4680]: I0126 17:11:43.240266 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 17:11:45 crc kubenswrapper[4680]: I0126 17:11:45.108627 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:45 crc kubenswrapper[4680]: I0126 17:11:45.246662 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:46 crc kubenswrapper[4680]: I0126 17:11:46.168610 4680 scope.go:117] "RemoveContainer" containerID="6279456bdce49f8c09f230c770944ff2c95f77bc7700dc830e1e9ea32b0e6c6d" Jan 26 17:11:46 crc kubenswrapper[4680]: I0126 17:11:46.198483 4680 scope.go:117] "RemoveContainer" containerID="ddb6f77b2fe11d27e440c4663fd6684674643c5328563c970fa27b8eeadbf0c4" Jan 26 17:11:46 crc kubenswrapper[4680]: I0126 17:11:46.274950 4680 scope.go:117] "RemoveContainer" containerID="47887ab4bbcc7b41b3d9b31c953f0c01279831f954229a8be6c55ef506c5d7d9" Jan 26 17:11:46 crc kubenswrapper[4680]: I0126 17:11:46.473618 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:46 crc kubenswrapper[4680]: I0126 17:11:46.473883 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5s7hj" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" containerID="cri-o://32d1b3b502d262053594ea6f07fca20f95ac81bffc6b3ddd42abee15a536c5ad" gracePeriod=2 Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.190402 4680 generic.go:334] "Generic (PLEG): container finished" podID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerID="32d1b3b502d262053594ea6f07fca20f95ac81bffc6b3ddd42abee15a536c5ad" exitCode=0 Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.194870 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerDied","Data":"32d1b3b502d262053594ea6f07fca20f95ac81bffc6b3ddd42abee15a536c5ad"} Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.290057 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.384904 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns2xt\" (UniqueName: \"kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt\") pod \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.384985 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content\") pod \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.385006 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities\") pod \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\" (UID: \"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1\") " Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.385586 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities" (OuterVolumeSpecName: "utilities") pod "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" (UID: "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.407986 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt" (OuterVolumeSpecName: "kube-api-access-ns2xt") pod "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" (UID: "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1"). InnerVolumeSpecName "kube-api-access-ns2xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.460156 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" (UID: "c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.487666 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns2xt\" (UniqueName: \"kubernetes.io/projected/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-kube-api-access-ns2xt\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.487702 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:47 crc kubenswrapper[4680]: I0126 17:11:47.487711 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.215505 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s7hj" event={"ID":"c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1","Type":"ContainerDied","Data":"077cfb59a4cbe824f580914e0ae761ca0b5b93cc28ad898fc8119b6c5fee8d65"} Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.215807 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5s7hj" Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.215880 4680 scope.go:117] "RemoveContainer" containerID="32d1b3b502d262053594ea6f07fca20f95ac81bffc6b3ddd42abee15a536c5ad" Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.259264 4680 scope.go:117] "RemoveContainer" containerID="f38cf59faa67e3e3c55837e0608bf48d9f79900847cc65632f85d736b388c5aa" Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.273963 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.299952 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5s7hj"] Jan 26 17:11:48 crc kubenswrapper[4680]: I0126 17:11:48.449200 4680 scope.go:117] "RemoveContainer" containerID="7316a1c8390bfa44461122199ee9bca47c4b56db2c346d58b43f0a436813a207" Jan 26 17:11:49 crc kubenswrapper[4680]: I0126 17:11:49.181701 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" path="/var/lib/kubelet/pods/c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1/volumes" Jan 26 17:11:51 crc kubenswrapper[4680]: I0126 17:11:51.170776 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:11:51 crc kubenswrapper[4680]: E0126 17:11:51.171596 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:11:52 crc kubenswrapper[4680]: I0126 17:11:52.191277 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mwhz9" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" probeResult="failure" output=< Jan 26 17:11:52 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" 
within 1s Jan 26 17:11:52 crc kubenswrapper[4680]: > Jan 26 17:12:00 crc kubenswrapper[4680]: I0126 17:12:00.344686 4680 generic.go:334] "Generic (PLEG): container finished" podID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerID="64e42ddfabd3b4d258aeb6a2cff61be02c5f44430e638e86c5c9130aa91ad795" exitCode=1 Jan 26 17:12:00 crc kubenswrapper[4680]: I0126 17:12:00.344770 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerDied","Data":"64e42ddfabd3b4d258aeb6a2cff61be02c5f44430e638e86c5c9130aa91ad795"} Jan 26 17:12:00 crc kubenswrapper[4680]: I0126 17:12:00.347092 4680 scope.go:117] "RemoveContainer" containerID="64e42ddfabd3b4d258aeb6a2cff61be02c5f44430e638e86c5c9130aa91ad795" Jan 26 17:12:01 crc kubenswrapper[4680]: I0126 17:12:01.356749 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" event={"ID":"c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c","Type":"ContainerStarted","Data":"a5b8ca47efc7c4d0f32e65613b573502e8e8511d7e89445a0f823b6ba44f1f85"} Jan 26 17:12:02 crc kubenswrapper[4680]: I0126 17:12:02.168419 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mwhz9" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" probeResult="failure" output=< Jan 26 17:12:02 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:12:02 crc kubenswrapper[4680]: > Jan 26 17:12:05 crc kubenswrapper[4680]: I0126 17:12:05.178640 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:12:05 crc kubenswrapper[4680]: E0126 17:12:05.179748 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:12:11 crc kubenswrapper[4680]: I0126 17:12:11.162278 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:12:11 crc kubenswrapper[4680]: I0126 17:12:11.212926 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:12:11 crc kubenswrapper[4680]: I0126 17:12:11.261436 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:12:12 crc kubenswrapper[4680]: I0126 17:12:12.472267 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mwhz9" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" containerID="cri-o://087c66d8e9fc28a61b1e6447be6f2461251ac2d22c082a6d49af2641f44fcfbb" gracePeriod=2 Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.495394 4680 generic.go:334] "Generic (PLEG): container finished" podID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerID="087c66d8e9fc28a61b1e6447be6f2461251ac2d22c082a6d49af2641f44fcfbb" exitCode=0 Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.496829 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerDied","Data":"087c66d8e9fc28a61b1e6447be6f2461251ac2d22c082a6d49af2641f44fcfbb"} Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.585379 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.695556 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v8qx\" (UniqueName: \"kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx\") pod \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.696358 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities\") pod \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.696578 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content\") pod \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\" (UID: \"d63bab54-5ae3-4e15-a517-eae441a6dbbf\") " Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.698192 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities" (OuterVolumeSpecName: "utilities") pod "d63bab54-5ae3-4e15-a517-eae441a6dbbf" (UID: "d63bab54-5ae3-4e15-a517-eae441a6dbbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.715503 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx" (OuterVolumeSpecName: "kube-api-access-8v8qx") pod "d63bab54-5ae3-4e15-a517-eae441a6dbbf" (UID: "d63bab54-5ae3-4e15-a517-eae441a6dbbf"). InnerVolumeSpecName "kube-api-access-8v8qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.799996 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v8qx\" (UniqueName: \"kubernetes.io/projected/d63bab54-5ae3-4e15-a517-eae441a6dbbf-kube-api-access-8v8qx\") on node \"crc\" DevicePath \"\"" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.800029 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.815768 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d63bab54-5ae3-4e15-a517-eae441a6dbbf" (UID: "d63bab54-5ae3-4e15-a517-eae441a6dbbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:12:13 crc kubenswrapper[4680]: I0126 17:12:13.902020 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63bab54-5ae3-4e15-a517-eae441a6dbbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.517914 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwhz9" event={"ID":"d63bab54-5ae3-4e15-a517-eae441a6dbbf","Type":"ContainerDied","Data":"9f7b6efbda9a33829dae57d89461d0783462a5b990ba3b351c8e6d206951d6a0"} Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.517968 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwhz9" Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.518030 4680 scope.go:117] "RemoveContainer" containerID="087c66d8e9fc28a61b1e6447be6f2461251ac2d22c082a6d49af2641f44fcfbb" Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.551556 4680 scope.go:117] "RemoveContainer" containerID="87e3480b334053321ae94c5eefca8012e923f2108f20a8b8dff4ca67e1b2df5f" Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.557593 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.573413 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mwhz9"] Jan 26 17:12:14 crc kubenswrapper[4680]: I0126 17:12:14.743688 4680 scope.go:117] "RemoveContainer" containerID="93461d1f4a346449d3badd0070500f8e9720c1a855704d81c37e74190648a7f3" Jan 26 17:12:15 crc kubenswrapper[4680]: I0126 17:12:15.181084 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" path="/var/lib/kubelet/pods/d63bab54-5ae3-4e15-a517-eae441a6dbbf/volumes" Jan 26 17:12:17 crc kubenswrapper[4680]: I0126 17:12:17.170388 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:12:17 crc kubenswrapper[4680]: E0126 17:12:17.171899 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:12:29 crc kubenswrapper[4680]: I0126 17:12:29.169612 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:12:29 crc kubenswrapper[4680]: E0126 17:12:29.170456 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:12:42 crc kubenswrapper[4680]: I0126 17:12:42.170319 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:12:42 crc kubenswrapper[4680]: E0126 17:12:42.172287 
4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.382695 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-867c94b99c-zl66f"] Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.388916 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389233 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389468 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389480 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389537 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389548 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389578 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389586 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389866 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389883 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389897 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389904 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389915 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389922 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="extract-utilities" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389931 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" 
containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.389938 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: E0126 17:12:48.389955 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.390055 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="extract-content" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.391689 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63bab54-5ae3-4e15-a517-eae441a6dbbf" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.391985 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d682c0-50c8-4fba-9f4d-b0f0e2abdbb1" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.392210 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7906ec80-b796-4c46-9867-cf61576a73b7" containerName="registry-server" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.394607 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556382 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556608 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6sv\" (UniqueName: \"kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556631 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.556668 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.597343 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867c94b99c-zl66f"] Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658673 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk6sv\" (UniqueName: \"kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658756 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658791 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658852 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658877 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.658906 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.667646 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.669244 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.670983 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.671625 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.679446 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.680837 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.686655 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk6sv\" (UniqueName: \"kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv\") pod \"neutron-867c94b99c-zl66f\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") " pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:48 crc kubenswrapper[4680]: I0126 17:12:48.720747 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:49 crc kubenswrapper[4680]: I0126 17:12:49.514758 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867c94b99c-zl66f"] Jan 26 17:12:49 crc kubenswrapper[4680]: I0126 17:12:49.872126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerStarted","Data":"7a585c76fd49318bd18a06bd5ed5a5753ec4135e9e63b2f9fad5904839c52463"} Jan 26 17:12:49 crc kubenswrapper[4680]: I0126 17:12:49.872816 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerStarted","Data":"3d807a9724e1aa08df42854fc585c6dab7f237c9a14070cd7eaa6b2d0fe436fd"} Jan 26 17:12:50 crc kubenswrapper[4680]: I0126 17:12:50.886723 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerStarted","Data":"3029c345b20e6aed1bff7196b1f3c3403d59164aeba3b12350394380bca106a8"} Jan 26 17:12:50 crc kubenswrapper[4680]: I0126 17:12:50.887116 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:12:57 crc kubenswrapper[4680]: I0126 17:12:57.169345 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:12:57 crc kubenswrapper[4680]: E0126 17:12:57.170275 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:13:12 crc kubenswrapper[4680]: I0126 17:13:12.170203 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:13:12 crc kubenswrapper[4680]: E0126 17:13:12.171002 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:13:18 crc kubenswrapper[4680]: I0126 17:13:18.739960 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-867c94b99c-zl66f" Jan 26 17:13:18 crc kubenswrapper[4680]: I0126 17:13:18.760327 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-867c94b99c-zl66f" podStartSLOduration=30.759839781 podStartE2EDuration="30.759839781s" podCreationTimestamp="2026-01-26 17:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:12:50.917262085 +0000 UTC m=+4046.078534354" watchObservedRunningTime="2026-01-26 17:13:18.759839781 +0000 UTC m=+4073.921112050" Jan 26 17:13:18 crc kubenswrapper[4680]: I0126 17:13:18.881492 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 17:13:18 crc kubenswrapper[4680]: I0126 17:13:18.881721 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9b85fd5c9-zlmxc" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-api" containerID="cri-o://31553f74f3fe8713c1f51db9a2294200098e1084990f8a2c1e049fe4fd00fe89" gracePeriod=30 Jan 26 17:13:18 crc kubenswrapper[4680]: I0126 17:13:18.883231 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9b85fd5c9-zlmxc" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-httpd" containerID="cri-o://c9a630ddf8e70d5b7311fcbc603b921cc92ca1efeddb7e2099dfd6d06e4d31ed" gracePeriod=30 Jan 26 17:13:20 crc kubenswrapper[4680]: I0126 17:13:20.193348 4680 generic.go:334] "Generic (PLEG): container finished" podID="4e158887-8689-4b75-a22c-fa6e8033190f" containerID="c9a630ddf8e70d5b7311fcbc603b921cc92ca1efeddb7e2099dfd6d06e4d31ed" exitCode=0 Jan 26 17:13:20 crc kubenswrapper[4680]: I0126 17:13:20.193563 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerDied","Data":"c9a630ddf8e70d5b7311fcbc603b921cc92ca1efeddb7e2099dfd6d06e4d31ed"} Jan 26 17:13:25 crc kubenswrapper[4680]: I0126 17:13:25.179400 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:13:25 crc kubenswrapper[4680]: E0126 17:13:25.181024 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.272711 4680 generic.go:334] "Generic (PLEG): container finished" podID="4e158887-8689-4b75-a22c-fa6e8033190f" containerID="31553f74f3fe8713c1f51db9a2294200098e1084990f8a2c1e049fe4fd00fe89" exitCode=0 Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.275355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerDied","Data":"31553f74f3fe8713c1f51db9a2294200098e1084990f8a2c1e049fe4fd00fe89"} Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.275703 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b85fd5c9-zlmxc" event={"ID":"4e158887-8689-4b75-a22c-fa6e8033190f","Type":"ContainerDied","Data":"4a27b2443e2af3bb3491267b2f835e0edf65adecf2d28a887650abc5ec680bf9"} Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.275834 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a27b2443e2af3bb3491267b2f835e0edf65adecf2d28a887650abc5ec680bf9" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.348632 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.426955 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427399 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427488 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427519 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427555 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427662 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.427732 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54jk2\" (UniqueName: \"kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2\") pod \"4e158887-8689-4b75-a22c-fa6e8033190f\" (UID: \"4e158887-8689-4b75-a22c-fa6e8033190f\") " Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.521362 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.521570 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2" (OuterVolumeSpecName: "kube-api-access-54jk2") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "kube-api-access-54jk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.531126 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54jk2\" (UniqueName: \"kubernetes.io/projected/4e158887-8689-4b75-a22c-fa6e8033190f-kube-api-access-54jk2\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.531165 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.553334 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config" (OuterVolumeSpecName: "config") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.560140 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.563566 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.564016 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.589522 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4e158887-8689-4b75-a22c-fa6e8033190f" (UID: "4e158887-8689-4b75-a22c-fa6e8033190f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.633589 4680 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.633651 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.633665 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.633679 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:27 crc kubenswrapper[4680]: I0126 17:13:27.633691 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e158887-8689-4b75-a22c-fa6e8033190f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:28 crc kubenswrapper[4680]: I0126 17:13:28.284364 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b85fd5c9-zlmxc" Jan 26 17:13:28 crc kubenswrapper[4680]: I0126 17:13:28.324925 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 17:13:28 crc kubenswrapper[4680]: I0126 17:13:28.336481 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9b85fd5c9-zlmxc"] Jan 26 17:13:29 crc kubenswrapper[4680]: I0126 17:13:29.182155 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" path="/var/lib/kubelet/pods/4e158887-8689-4b75-a22c-fa6e8033190f/volumes" Jan 26 17:13:40 crc kubenswrapper[4680]: I0126 17:13:40.169950 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:13:40 crc kubenswrapper[4680]: E0126 17:13:40.170899 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:13:46 crc kubenswrapper[4680]: I0126 17:13:46.425083 4680 scope.go:117] "RemoveContainer" containerID="31553f74f3fe8713c1f51db9a2294200098e1084990f8a2c1e049fe4fd00fe89" Jan 26 17:13:46 crc kubenswrapper[4680]: I0126 17:13:46.465338 4680 scope.go:117] "RemoveContainer" containerID="c9a630ddf8e70d5b7311fcbc603b921cc92ca1efeddb7e2099dfd6d06e4d31ed" Jan 26 17:13:52 crc kubenswrapper[4680]: I0126 17:13:52.170168 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:13:52 crc kubenswrapper[4680]: E0126 17:13:52.170935 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:14:07 crc kubenswrapper[4680]: I0126 17:14:07.170490 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:14:07 crc kubenswrapper[4680]: E0126 17:14:07.171286 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:14:22 crc kubenswrapper[4680]: I0126 17:14:22.170033 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:14:22 crc kubenswrapper[4680]: I0126 17:14:22.781289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f"} Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.196646 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn"] Jan 26 17:15:00 crc kubenswrapper[4680]: E0126 17:15:00.197960 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-api" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.197983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-api" Jan 26 17:15:00 crc kubenswrapper[4680]: E0126 17:15:00.198031 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-httpd" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.198040 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-httpd" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.198282 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-httpd" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.198323 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e158887-8689-4b75-a22c-fa6e8033190f" containerName="neutron-api" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.199335 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.206685 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.209008 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn"] Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.210145 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.227928 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.228325 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.228358 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4tq\" (UniqueName: \"kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.329656 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.329760 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4tq\" (UniqueName: \"kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.329836 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.331111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume\") pod 
\"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.336773 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.349021 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4tq\" (UniqueName: \"kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq\") pod \"collect-profiles-29490795-dnjqn\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.530206 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:00 crc kubenswrapper[4680]: I0126 17:15:00.977405 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn"] Jan 26 17:15:01 crc kubenswrapper[4680]: I0126 17:15:01.081879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" event={"ID":"487cee21-a41e-45a1-a79e-8335c55fcdf1","Type":"ContainerStarted","Data":"51beff2cb7a99565e66cf7625b0ea377fb09dcf145f973267e1e9ac4252e444f"} Jan 26 17:15:02 crc kubenswrapper[4680]: I0126 17:15:02.092198 4680 generic.go:334] "Generic (PLEG): container finished" podID="487cee21-a41e-45a1-a79e-8335c55fcdf1" containerID="df71e1dfc51f65f6a2ff2b3651a20549d32529e9973236f52bd60e44dac02023" exitCode=0 Jan 26 17:15:02 crc kubenswrapper[4680]: I0126 17:15:02.092248 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" event={"ID":"487cee21-a41e-45a1-a79e-8335c55fcdf1","Type":"ContainerDied","Data":"df71e1dfc51f65f6a2ff2b3651a20549d32529e9973236f52bd60e44dac02023"} Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.559883 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.595671 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume\") pod \"487cee21-a41e-45a1-a79e-8335c55fcdf1\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.595792 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume\") pod \"487cee21-a41e-45a1-a79e-8335c55fcdf1\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.596061 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4tq\" (UniqueName: \"kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq\") pod \"487cee21-a41e-45a1-a79e-8335c55fcdf1\" (UID: \"487cee21-a41e-45a1-a79e-8335c55fcdf1\") " Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.596835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume" (OuterVolumeSpecName: "config-volume") pod "487cee21-a41e-45a1-a79e-8335c55fcdf1" (UID: "487cee21-a41e-45a1-a79e-8335c55fcdf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.605239 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "487cee21-a41e-45a1-a79e-8335c55fcdf1" (UID: "487cee21-a41e-45a1-a79e-8335c55fcdf1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.605277 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq" (OuterVolumeSpecName: "kube-api-access-6x4tq") pod "487cee21-a41e-45a1-a79e-8335c55fcdf1" (UID: "487cee21-a41e-45a1-a79e-8335c55fcdf1"). InnerVolumeSpecName "kube-api-access-6x4tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.698959 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487cee21-a41e-45a1-a79e-8335c55fcdf1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.699034 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487cee21-a41e-45a1-a79e-8335c55fcdf1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:03 crc kubenswrapper[4680]: I0126 17:15:03.699047 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4tq\" (UniqueName: \"kubernetes.io/projected/487cee21-a41e-45a1-a79e-8335c55fcdf1-kube-api-access-6x4tq\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:04 crc kubenswrapper[4680]: I0126 17:15:04.112312 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" event={"ID":"487cee21-a41e-45a1-a79e-8335c55fcdf1","Type":"ContainerDied","Data":"51beff2cb7a99565e66cf7625b0ea377fb09dcf145f973267e1e9ac4252e444f"} Jan 26 17:15:04 crc kubenswrapper[4680]: I0126 17:15:04.112676 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51beff2cb7a99565e66cf7625b0ea377fb09dcf145f973267e1e9ac4252e444f" Jan 26 17:15:04 crc kubenswrapper[4680]: I0126 17:15:04.112376 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn" Jan 26 17:15:04 crc kubenswrapper[4680]: I0126 17:15:04.639864 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw"] Jan 26 17:15:04 crc kubenswrapper[4680]: I0126 17:15:04.649354 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-29chw"] Jan 26 17:15:05 crc kubenswrapper[4680]: I0126 17:15:05.223905 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f629c8d9-c945-477d-ad2b-6c397e93b74c" path="/var/lib/kubelet/pods/f629c8d9-c945-477d-ad2b-6c397e93b74c/volumes" Jan 26 17:15:46 crc kubenswrapper[4680]: I0126 17:15:46.547685 4680 scope.go:117] "RemoveContainer" containerID="7350ab58426ad93630b2e29e1128d5354aa7778fb09749bba8e30faa99efdbe8" Jan 26 17:16:43 crc kubenswrapper[4680]: I0126 17:16:43.823716 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:16:46 crc kubenswrapper[4680]: I0126 17:16:46.981309 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:16:46 crc kubenswrapper[4680]: I0126 17:16:46.981928 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:16 crc kubenswrapper[4680]: I0126 17:17:16.981186 
4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:17:16 crc kubenswrapper[4680]: I0126 17:17:16.981735 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:46 crc kubenswrapper[4680]: I0126 17:17:46.980515 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:17:46 crc kubenswrapper[4680]: I0126 17:17:46.981753 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:46 crc kubenswrapper[4680]: I0126 17:17:46.981881 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:17:46 crc kubenswrapper[4680]: I0126 17:17:46.983591 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:17:46 crc kubenswrapper[4680]: I0126 17:17:46.983727 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f" gracePeriod=600 Jan 26 17:17:47 crc kubenswrapper[4680]: I0126 17:17:47.600626 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f" exitCode=0 Jan 26 17:17:47 crc kubenswrapper[4680]: I0126 17:17:47.600680 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f"} Jan 26 17:17:47 crc kubenswrapper[4680]: I0126 17:17:47.600724 4680 scope.go:117] "RemoveContainer" containerID="7c46628a1d3fd27aba92e37ae7f104b202e520800b57135d27163bfd5ea83342" Jan 26 17:17:48 crc kubenswrapper[4680]: I0126 17:17:48.613124 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"} Jan 26 17:20:16 crc kubenswrapper[4680]: I0126 17:20:16.980751 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:20:16 crc kubenswrapper[4680]: I0126 17:20:16.981545 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.443747 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"] Jan 26 17:20:39 crc kubenswrapper[4680]: E0126 17:20:39.444856 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487cee21-a41e-45a1-a79e-8335c55fcdf1" containerName="collect-profiles" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.444876 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="487cee21-a41e-45a1-a79e-8335c55fcdf1" containerName="collect-profiles" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.445153 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="487cee21-a41e-45a1-a79e-8335c55fcdf1" containerName="collect-profiles" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.446848 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.469335 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"] Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.601348 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.601433 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvnd\" (UniqueName: \"kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.601528 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.703552 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.703630 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpvnd\" (UniqueName: \"kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.703680 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.704895 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.704959 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.724695 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tpvnd\" (UniqueName: \"kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd\") pod \"certified-operators-j8nlp\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") " pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:39 crc kubenswrapper[4680]: I0126 17:20:39.769283 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8nlp" Jan 26 17:20:40 crc kubenswrapper[4680]: I0126 17:20:40.271350 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"] Jan 26 17:20:41 crc kubenswrapper[4680]: I0126 17:20:41.112776 4680 generic.go:334] "Generic (PLEG): container finished" podID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerID="4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3" exitCode=0 Jan 26 17:20:41 crc kubenswrapper[4680]: I0126 17:20:41.112951 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerDied","Data":"4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3"} Jan 26 17:20:41 crc kubenswrapper[4680]: I0126 17:20:41.113468 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerStarted","Data":"3f2a4e62b67857bc77f3a868bc3ad43c12a6ac38e994ec95293f5100e980c097"} Jan 26 17:20:41 crc kubenswrapper[4680]: I0126 17:20:41.125486 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.068723 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"] Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.071988 4680 util.go:30] "No sandbox for pod can be found. 
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.090205 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"]
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.207872 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.207915 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29c4x\" (UniqueName: \"kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.208462 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.310327 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.310458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.310483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29c4x\" (UniqueName: \"kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.310950 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.314130 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.329356 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29c4x\" (UniqueName: \"kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x\") pod \"redhat-marketplace-g8ffh\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") " pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:44 crc kubenswrapper[4680]: I0126 17:20:44.394183 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:45 crc kubenswrapper[4680]: I0126 17:20:45.271685 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"]
Jan 26 17:20:45 crc kubenswrapper[4680]: W0126 17:20:45.284552 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef8d17f4_09de_4b7d_9643_da6613aa9525.slice/crio-35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d WatchSource:0}: Error finding container 35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d: Status 404 returned error can't find the container with id 35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d
Jan 26 17:20:46 crc kubenswrapper[4680]: I0126 17:20:46.171796 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerStarted","Data":"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"}
Jan 26 17:20:46 crc kubenswrapper[4680]: I0126 17:20:46.173899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerStarted","Data":"3ce1716896a0e466ed92ba4f720cba8f6b122c13f78857973db5fdba71dcb66e"}
Jan 26 17:20:46 crc kubenswrapper[4680]: I0126 17:20:46.173925 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerStarted","Data":"35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d"}
Jan 26 17:20:46 crc kubenswrapper[4680]: I0126 17:20:46.981097 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:20:46 crc kubenswrapper[4680]: I0126 17:20:46.982396 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:20:47 crc kubenswrapper[4680]: I0126 17:20:47.188615 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerID="3ce1716896a0e466ed92ba4f720cba8f6b122c13f78857973db5fdba71dcb66e" exitCode=0
Jan 26 17:20:47 crc kubenswrapper[4680]: I0126 17:20:47.188716 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerDied","Data":"3ce1716896a0e466ed92ba4f720cba8f6b122c13f78857973db5fdba71dcb66e"}
Jan 26 17:20:50 crc kubenswrapper[4680]: I0126 17:20:50.237828 4680 generic.go:334] "Generic (PLEG): container finished" podID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerID="4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8" exitCode=0
Jan 26 17:20:50 crc kubenswrapper[4680]: I0126 17:20:50.238780 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerDied","Data":"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"}
Jan 26 17:20:51 crc kubenswrapper[4680]: I0126 17:20:51.257006 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerStarted","Data":"bd3d85b4ada473bf3a8ce6772b3ac5a836dba30954ddd11362fc956fc7240c8f"}
Jan 26 17:20:52 crc kubenswrapper[4680]: I0126 17:20:52.269568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerStarted","Data":"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"}
Jan 26 17:20:52 crc kubenswrapper[4680]: I0126 17:20:52.272027 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerID="bd3d85b4ada473bf3a8ce6772b3ac5a836dba30954ddd11362fc956fc7240c8f" exitCode=0
Jan 26 17:20:52 crc kubenswrapper[4680]: I0126 17:20:52.272095 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerDied","Data":"bd3d85b4ada473bf3a8ce6772b3ac5a836dba30954ddd11362fc956fc7240c8f"}
Jan 26 17:20:52 crc kubenswrapper[4680]: I0126 17:20:52.294371 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8nlp" podStartSLOduration=3.412616607 podStartE2EDuration="13.29434936s" podCreationTimestamp="2026-01-26 17:20:39 +0000 UTC" firstStartedPulling="2026-01-26 17:20:41.116599705 +0000 UTC m=+4516.277871974" lastFinishedPulling="2026-01-26 17:20:50.998332458 +0000 UTC m=+4526.159604727" observedRunningTime="2026-01-26 17:20:52.29224019 +0000 UTC m=+4527.453512469" watchObservedRunningTime="2026-01-26 17:20:52.29434936 +0000 UTC m=+4527.455621629"
Jan 26 17:20:53 crc kubenswrapper[4680]: I0126 17:20:53.284464 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerStarted","Data":"912ba652b7be3b2c36bcb8a44e9b5837b96f95ed72a41f85a6d278136ade10ba"}
Jan 26 17:20:53 crc kubenswrapper[4680]: I0126 17:20:53.305822 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8ffh" podStartSLOduration=3.785912053 podStartE2EDuration="9.305798432s" podCreationTimestamp="2026-01-26 17:20:44 +0000 UTC" firstStartedPulling="2026-01-26 17:20:47.190585587 +0000 UTC m=+4522.351857856" lastFinishedPulling="2026-01-26 17:20:52.710471966 +0000 UTC m=+4527.871744235" observedRunningTime="2026-01-26 17:20:53.302251231 +0000 UTC m=+4528.463523510" watchObservedRunningTime="2026-01-26 17:20:53.305798432 +0000 UTC m=+4528.467070701"
Jan 26 17:20:54 crc kubenswrapper[4680]: I0126 17:20:54.394430 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:20:54 crc kubenswrapper[4680]: I0126 17:20:54.395091 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8ffh"
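
The two pod_startup_latency_tracker lines above encode a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the startup SLO. Rechecking the certified-operators-j8nlp numbers, with all values copied from the tracker line itself:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-26 17:20:39 +0000 UTC")
        running := parse("2026-01-26 17:20:52.29434936 +0000 UTC")
        pullStart := parse("2026-01-26 17:20:41.116599705 +0000 UTC")
        pullEnd := parse("2026-01-26 17:20:50.998332458 +0000 UTC")

        e2e := running.Sub(created)         // 13.29434936s, the logged podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 3.412616607s, the logged podStartSLOduration
        fmt.Println(e2e, slo)
    }
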
Jan 26 17:20:55 crc kubenswrapper[4680]: I0126 17:20:55.443178 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g8ffh" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="registry-server" probeResult="failure" output=<
Jan 26 17:20:55 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 17:20:55 crc kubenswrapper[4680]: >
Jan 26 17:20:59 crc kubenswrapper[4680]: I0126 17:20:59.769579 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:20:59 crc kubenswrapper[4680]: I0126 17:20:59.770948 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:20:59 crc kubenswrapper[4680]: I0126 17:20:59.814054 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:21:00 crc kubenswrapper[4680]: I0126 17:21:00.398870 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:21:00 crc kubenswrapper[4680]: I0126 17:21:00.458693 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"]
Jan 26 17:21:02 crc kubenswrapper[4680]: I0126 17:21:02.367910 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8nlp" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="registry-server" containerID="cri-o://b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7" gracePeriod=2
Jan 26 17:21:02 crc kubenswrapper[4680]: I0126 17:21:02.943192 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.135730 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content\") pod \"081abe55-c9da-4151-a697-5f6fc1bb386f\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") "
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.135919 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvnd\" (UniqueName: \"kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd\") pod \"081abe55-c9da-4151-a697-5f6fc1bb386f\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") "
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.136114 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities\") pod \"081abe55-c9da-4151-a697-5f6fc1bb386f\" (UID: \"081abe55-c9da-4151-a697-5f6fc1bb386f\") "
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.137304 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities" (OuterVolumeSpecName: "utilities") pod "081abe55-c9da-4151-a697-5f6fc1bb386f" (UID: "081abe55-c9da-4151-a697-5f6fc1bb386f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.156311 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd" (OuterVolumeSpecName: "kube-api-access-tpvnd") pod "081abe55-c9da-4151-a697-5f6fc1bb386f" (UID: "081abe55-c9da-4151-a697-5f6fc1bb386f"). InnerVolumeSpecName "kube-api-access-tpvnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.198491 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "081abe55-c9da-4151-a697-5f6fc1bb386f" (UID: "081abe55-c9da-4151-a697-5f6fc1bb386f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.238035 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.238293 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081abe55-c9da-4151-a697-5f6fc1bb386f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.238351 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpvnd\" (UniqueName: \"kubernetes.io/projected/081abe55-c9da-4151-a697-5f6fc1bb386f-kube-api-access-tpvnd\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.380137 4680 generic.go:334] "Generic (PLEG): container finished" podID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerID="b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7" exitCode=0
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.380188 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerDied","Data":"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"}
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.380491 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8nlp" event={"ID":"081abe55-c9da-4151-a697-5f6fc1bb386f","Type":"ContainerDied","Data":"3f2a4e62b67857bc77f3a868bc3ad43c12a6ac38e994ec95293f5100e980c097"}
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.380222 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8nlp"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.380515 4680 scope.go:117] "RemoveContainer" containerID="b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.425184 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"]
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.426959 4680 scope.go:117] "RemoveContainer" containerID="4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.454437 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8nlp"]
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.459666 4680 scope.go:117] "RemoveContainer" containerID="4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.508440 4680 scope.go:117] "RemoveContainer" containerID="b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"
Jan 26 17:21:03 crc kubenswrapper[4680]: E0126 17:21:03.514497 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7\": container with ID starting with b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7 not found: ID does not exist" containerID="b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.514560 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7"} err="failed to get container status \"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7\": rpc error: code = NotFound desc = could not find container \"b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7\": container with ID starting with b03d19ad5b66efce76aa8c534965bca8b518e9a15dec973f4aba149f5fafd9f7 not found: ID does not exist"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.514590 4680 scope.go:117] "RemoveContainer" containerID="4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"
Jan 26 17:21:03 crc kubenswrapper[4680]: E0126 17:21:03.515617 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8\": container with ID starting with 4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8 not found: ID does not exist" containerID="4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.515651 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8"} err="failed to get container status \"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8\": rpc error: code = NotFound desc = could not find container \"4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8\": container with ID starting with 4603604fa27dee4607ca7ad79fc327883a1c27dc552a19adf77d360bd4e779c8 not found: ID does not exist"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.515672 4680 scope.go:117] "RemoveContainer" containerID="4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3"
Jan 26 17:21:03 crc kubenswrapper[4680]: E0126 17:21:03.515944 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3\": container with ID starting with 4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3 not found: ID does not exist" containerID="4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3"
Jan 26 17:21:03 crc kubenswrapper[4680]: I0126 17:21:03.515977 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3"} err="failed to get container status \"4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3\": rpc error: code = NotFound desc = could not find container \"4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3\": container with ID starting with 4c3652be0f59c0c2952b8684b81fc4449d1e10032404cd8f6562c4233968bff3 not found: ID does not exist"
Jan 26 17:21:04 crc kubenswrapper[4680]: I0126 17:21:04.445259 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:21:04 crc kubenswrapper[4680]: I0126 17:21:04.498051 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:21:05 crc kubenswrapper[4680]: I0126 17:21:05.182321 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" path="/var/lib/kubelet/pods/081abe55-c9da-4151-a697-5f6fc1bb386f/volumes"
Jan 26 17:21:05 crc kubenswrapper[4680]: I0126 17:21:05.455471 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"]
Jan 26 17:21:06 crc kubenswrapper[4680]: I0126 17:21:06.412097 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8ffh" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="registry-server" containerID="cri-o://912ba652b7be3b2c36bcb8a44e9b5837b96f95ed72a41f85a6d278136ade10ba" gracePeriod=2
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.433469 4680 generic.go:334] "Generic (PLEG): container finished" podID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerID="912ba652b7be3b2c36bcb8a44e9b5837b96f95ed72a41f85a6d278136ade10ba" exitCode=0
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.434022 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerDied","Data":"912ba652b7be3b2c36bcb8a44e9b5837b96f95ed72a41f85a6d278136ade10ba"}
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.434057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8ffh" event={"ID":"ef8d17f4-09de-4b7d-9643-da6613aa9525","Type":"ContainerDied","Data":"35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d"}
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.434112 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e86e6ed9f21ebb0a0f0f26097be424a2c811e9e4fbc4f60bc32b507606f31d"
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.487507 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.526876 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29c4x\" (UniqueName: \"kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x\") pod \"ef8d17f4-09de-4b7d-9643-da6613aa9525\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") "
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.527029 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content\") pod \"ef8d17f4-09de-4b7d-9643-da6613aa9525\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") "
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.527172 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities\") pod \"ef8d17f4-09de-4b7d-9643-da6613aa9525\" (UID: \"ef8d17f4-09de-4b7d-9643-da6613aa9525\") "
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.527875 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities" (OuterVolumeSpecName: "utilities") pod "ef8d17f4-09de-4b7d-9643-da6613aa9525" (UID: "ef8d17f4-09de-4b7d-9643-da6613aa9525"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.528471 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.547599 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x" (OuterVolumeSpecName: "kube-api-access-29c4x") pod "ef8d17f4-09de-4b7d-9643-da6613aa9525" (UID: "ef8d17f4-09de-4b7d-9643-da6613aa9525"). InnerVolumeSpecName "kube-api-access-29c4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.552058 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef8d17f4-09de-4b7d-9643-da6613aa9525" (UID: "ef8d17f4-09de-4b7d-9643-da6613aa9525"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.629914 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29c4x\" (UniqueName: \"kubernetes.io/projected/ef8d17f4-09de-4b7d-9643-da6613aa9525-kube-api-access-29c4x\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:07 crc kubenswrapper[4680]: I0126 17:21:07.629954 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d17f4-09de-4b7d-9643-da6613aa9525-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:21:08 crc kubenswrapper[4680]: I0126 17:21:08.443387 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8ffh"
Jan 26 17:21:08 crc kubenswrapper[4680]: I0126 17:21:08.490294 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"]
Jan 26 17:21:08 crc kubenswrapper[4680]: I0126 17:21:08.501706 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8ffh"]
Jan 26 17:21:09 crc kubenswrapper[4680]: I0126 17:21:09.180965 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" path="/var/lib/kubelet/pods/ef8d17f4-09de-4b7d-9643-da6613aa9525/volumes"
Jan 26 17:21:16 crc kubenswrapper[4680]: I0126 17:21:16.980762 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:21:16 crc kubenswrapper[4680]: I0126 17:21:16.981362 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:21:16 crc kubenswrapper[4680]: I0126 17:21:16.981414 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm"
Jan 26 17:21:16 crc kubenswrapper[4680]: I0126 17:21:16.982149 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:21:16 crc kubenswrapper[4680]: I0126 17:21:16.982205 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" gracePeriod=600
Jan 26 17:21:17 crc kubenswrapper[4680]: I0126 17:21:17.544904 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" exitCode=0
Jan 26 17:21:17 crc kubenswrapper[4680]: I0126 17:21:17.544953 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"}
Jan 26 17:21:17 crc kubenswrapper[4680]: I0126 17:21:17.544998 4680 scope.go:117] "RemoveContainer" containerID="6f14729e3f6e9d234a2b92193451a1fa52c126acdb64367b1b680d12f1ee4f2f"
Jan 26 17:21:17 crc kubenswrapper[4680]: E0126 17:21:17.894967 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:21:18 crc kubenswrapper[4680]: I0126 17:21:18.555006 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"
Jan 26 17:21:18 crc kubenswrapper[4680]: E0126 17:21:18.555719 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:21:33 crc kubenswrapper[4680]: I0126 17:21:33.181315 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"
Jan 26 17:21:33 crc kubenswrapper[4680]: E0126 17:21:33.182458 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:21:45 crc kubenswrapper[4680]: I0126 17:21:45.176062 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"
Jan 26 17:21:45 crc kubenswrapper[4680]: E0126 17:21:45.176961 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:22:00 crc kubenswrapper[4680]: I0126 17:22:00.170667 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"
Jan 26 17:22:00 crc kubenswrapper[4680]: E0126 17:22:00.171739 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:22:14 crc kubenswrapper[4680]: I0126 17:22:14.169539 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679"
Jan 26 17:22:14 crc kubenswrapper[4680]: E0126 17:22:14.170323 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
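
From 17:21:18 onward the pod worker keeps re-syncing the machine-config-daemon pod (roughly every 12 to 15 seconds), but each attempt is rejected by the restart backoff gate, hence the repeated "back-off 5m0s" errors: the container has failed often enough that its CrashLoopBackOff delay has reached the kubelet's cap. The delay roughly doubles per restart from a 10-second base up to a 5-minute maximum; those constants are kubelet defaults assumed here, not read from this log. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay approximates kubelet's restart backoff: double per
    // failed restart, starting at 10s and capped at 5m ("back-off 5m0s").
    func crashLoopDelay(restarts int) time.Duration {
        const base, maxDelay = 10 * time.Second, 5 * time.Minute
        d := base
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Println(r, crashLoopDelay(r)) // 10s 20s 40s 1m20s 2m40s 5m 5m
        }
    }
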
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.033025 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.034242 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="extract-utilities" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.034349 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="extract-utilities" Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.034846 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="extract-utilities" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.034920 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="extract-utilities" Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.034978 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.035031 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.035113 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="extract-content" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.035170 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="extract-content" Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.035264 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.035315 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: E0126 17:22:15.035367 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="extract-content" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.035417 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="extract-content" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.036092 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8d17f4-09de-4b7d-9643-da6613aa9525" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.036220 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="081abe55-c9da-4151-a697-5f6fc1bb386f" containerName="registry-server" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.038321 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.066638 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.089610 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.089788 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6qp\" (UniqueName: \"kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.089839 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.193247 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.193495 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6qp\" (UniqueName: \"kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.193517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.194427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.194883 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.215837 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7n6qp\" (UniqueName: \"kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp\") pod \"community-operators-pljb4\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.361594 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:15 crc kubenswrapper[4680]: I0126 17:22:15.843197 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:15 crc kubenswrapper[4680]: W0126 17:22:15.850415 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d063b7_6cc4_429b_81c4_59a6e2bef098.slice/crio-c891001935d0cdc74994296ab245be2a95ee0204ed042b0a705291acfa6d9b44 WatchSource:0}: Error finding container c891001935d0cdc74994296ab245be2a95ee0204ed042b0a705291acfa6d9b44: Status 404 returned error can't find the container with id c891001935d0cdc74994296ab245be2a95ee0204ed042b0a705291acfa6d9b44 Jan 26 17:22:16 crc kubenswrapper[4680]: I0126 17:22:16.067811 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerStarted","Data":"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f"} Jan 26 17:22:16 crc kubenswrapper[4680]: I0126 17:22:16.068260 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerStarted","Data":"c891001935d0cdc74994296ab245be2a95ee0204ed042b0a705291acfa6d9b44"} Jan 26 17:22:17 crc kubenswrapper[4680]: I0126 17:22:17.080481 4680 generic.go:334] "Generic (PLEG): container finished" podID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerID="3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f" exitCode=0 Jan 26 17:22:17 crc kubenswrapper[4680]: I0126 17:22:17.080554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerDied","Data":"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f"} Jan 26 17:22:18 crc kubenswrapper[4680]: I0126 17:22:18.094908 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerStarted","Data":"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa"} Jan 26 17:22:20 crc kubenswrapper[4680]: I0126 17:22:20.115010 4680 generic.go:334] "Generic (PLEG): container finished" podID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerID="b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa" exitCode=0 Jan 26 17:22:20 crc kubenswrapper[4680]: I0126 17:22:20.115057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerDied","Data":"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa"} Jan 26 17:22:21 crc kubenswrapper[4680]: I0126 17:22:21.138395 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" 
event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerStarted","Data":"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7"} Jan 26 17:22:21 crc kubenswrapper[4680]: I0126 17:22:21.174987 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pljb4" podStartSLOduration=2.634708762 podStartE2EDuration="6.174966344s" podCreationTimestamp="2026-01-26 17:22:15 +0000 UTC" firstStartedPulling="2026-01-26 17:22:17.082777755 +0000 UTC m=+4612.244050024" lastFinishedPulling="2026-01-26 17:22:20.623035337 +0000 UTC m=+4615.784307606" observedRunningTime="2026-01-26 17:22:21.164096565 +0000 UTC m=+4616.325368834" watchObservedRunningTime="2026-01-26 17:22:21.174966344 +0000 UTC m=+4616.336238613" Jan 26 17:22:25 crc kubenswrapper[4680]: I0126 17:22:25.362831 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:25 crc kubenswrapper[4680]: I0126 17:22:25.363449 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:25 crc kubenswrapper[4680]: I0126 17:22:25.410180 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:26 crc kubenswrapper[4680]: I0126 17:22:26.220719 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:26 crc kubenswrapper[4680]: I0126 17:22:26.268195 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.170033 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:22:28 crc kubenswrapper[4680]: E0126 17:22:28.170600 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.196051 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pljb4" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="registry-server" containerID="cri-o://d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7" gracePeriod=2 Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.685183 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.749570 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content\") pod \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.749745 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n6qp\" (UniqueName: \"kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp\") pod \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.749821 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities\") pod \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\" (UID: \"f9d063b7-6cc4-429b-81c4-59a6e2bef098\") " Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.750787 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities" (OuterVolumeSpecName: "utilities") pod "f9d063b7-6cc4-429b-81c4-59a6e2bef098" (UID: "f9d063b7-6cc4-429b-81c4-59a6e2bef098"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.758774 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp" (OuterVolumeSpecName: "kube-api-access-7n6qp") pod "f9d063b7-6cc4-429b-81c4-59a6e2bef098" (UID: "f9d063b7-6cc4-429b-81c4-59a6e2bef098"). InnerVolumeSpecName "kube-api-access-7n6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.807363 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9d063b7-6cc4-429b-81c4-59a6e2bef098" (UID: "f9d063b7-6cc4-429b-81c4-59a6e2bef098"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.853377 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.853410 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n6qp\" (UniqueName: \"kubernetes.io/projected/f9d063b7-6cc4-429b-81c4-59a6e2bef098-kube-api-access-7n6qp\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:28 crc kubenswrapper[4680]: I0126 17:22:28.853426 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d063b7-6cc4-429b-81c4-59a6e2bef098-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.206178 4680 generic.go:334] "Generic (PLEG): container finished" podID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerID="d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7" exitCode=0 Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.206235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerDied","Data":"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7"} Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.206267 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pljb4" event={"ID":"f9d063b7-6cc4-429b-81c4-59a6e2bef098","Type":"ContainerDied","Data":"c891001935d0cdc74994296ab245be2a95ee0204ed042b0a705291acfa6d9b44"} Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.206288 4680 scope.go:117] "RemoveContainer" containerID="d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.206489 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pljb4" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.237721 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.238089 4680 scope.go:117] "RemoveContainer" containerID="b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.252583 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pljb4"] Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.274050 4680 scope.go:117] "RemoveContainer" containerID="3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.306419 4680 scope.go:117] "RemoveContainer" containerID="d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7" Jan 26 17:22:29 crc kubenswrapper[4680]: E0126 17:22:29.307432 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7\": container with ID starting with d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7 not found: ID does not exist" containerID="d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.307491 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7"} err="failed to get container status \"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7\": rpc error: code = NotFound desc = could not find container \"d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7\": container with ID starting with d68121a4339f0cc950363344f87359e4473c3630acfbd3de5a312771eaacd9f7 not found: ID does not exist" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.307525 4680 scope.go:117] "RemoveContainer" containerID="b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa" Jan 26 17:22:29 crc kubenswrapper[4680]: E0126 17:22:29.307913 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa\": container with ID starting with b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa not found: ID does not exist" containerID="b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.307973 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa"} err="failed to get container status \"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa\": rpc error: code = NotFound desc = could not find container \"b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa\": container with ID starting with b9391311ba7ee6eb507e4a2a1cc43c2c613814e7e35739d0efb501619007d8aa not found: ID does not exist" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.307999 4680 scope.go:117] "RemoveContainer" containerID="3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f" Jan 26 17:22:29 crc kubenswrapper[4680]: E0126 17:22:29.308367 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f\": container with ID starting with 3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f not found: ID does not exist" containerID="3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f" Jan 26 17:22:29 crc kubenswrapper[4680]: I0126 17:22:29.308402 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f"} err="failed to get container status \"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f\": rpc error: code = NotFound desc = could not find container \"3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f\": container with ID starting with 3114cb01fd6bd63017687a3c572a9be0f94047fffc398c5ac26ea47deb83aa4f not found: ID does not exist" Jan 26 17:22:31 crc kubenswrapper[4680]: I0126 17:22:31.179731 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" path="/var/lib/kubelet/pods/f9d063b7-6cc4-429b-81c4-59a6e2bef098/volumes" Jan 26 17:22:43 crc kubenswrapper[4680]: I0126 17:22:43.169306 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:22:43 crc kubenswrapper[4680]: E0126 17:22:43.170015 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.412083 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:22:50 crc kubenswrapper[4680]: E0126 17:22:50.413398 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="extract-utilities" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.413417 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="extract-utilities" Jan 26 17:22:50 crc kubenswrapper[4680]: E0126 17:22:50.413444 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="extract-content" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.413454 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="extract-content" Jan 26 17:22:50 crc kubenswrapper[4680]: E0126 17:22:50.413497 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="registry-server" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.413505 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="registry-server" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.413724 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d063b7-6cc4-429b-81c4-59a6e2bef098" containerName="registry-server" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.416283 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.431734 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.506605 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njp4z\" (UniqueName: \"kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.506902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.507421 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.609165 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.609259 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njp4z\" (UniqueName: \"kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.609330 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.609755 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.609850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.630022 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-njp4z\" (UniqueName: \"kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z\") pod \"redhat-operators-5cj4s\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:50 crc kubenswrapper[4680]: I0126 17:22:50.746230 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:22:51 crc kubenswrapper[4680]: I0126 17:22:51.249155 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:22:52 crc kubenswrapper[4680]: I0126 17:22:52.412958 4680 generic.go:334] "Generic (PLEG): container finished" podID="c915ad36-c36d-4259-8722-076a2167ced9" containerID="3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68" exitCode=0 Jan 26 17:22:52 crc kubenswrapper[4680]: I0126 17:22:52.413093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerDied","Data":"3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68"} Jan 26 17:22:52 crc kubenswrapper[4680]: I0126 17:22:52.413242 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerStarted","Data":"2dfeef1c2d0573c86615f2ab81761ae5f007c360a30eb76d44e650ca6d8ee940"} Jan 26 17:22:53 crc kubenswrapper[4680]: I0126 17:22:53.425366 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerStarted","Data":"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8"} Jan 26 17:22:56 crc kubenswrapper[4680]: I0126 17:22:56.169775 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:22:56 crc kubenswrapper[4680]: E0126 17:22:56.170603 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:22:57 crc kubenswrapper[4680]: I0126 17:22:57.533933 4680 generic.go:334] "Generic (PLEG): container finished" podID="c915ad36-c36d-4259-8722-076a2167ced9" containerID="51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8" exitCode=0 Jan 26 17:22:57 crc kubenswrapper[4680]: I0126 17:22:57.534003 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerDied","Data":"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8"} Jan 26 17:22:58 crc kubenswrapper[4680]: I0126 17:22:58.546563 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerStarted","Data":"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec"} Jan 26 17:23:00 crc kubenswrapper[4680]: I0126 17:23:00.746663 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:00 crc kubenswrapper[4680]: I0126 17:23:00.747016 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:01 crc kubenswrapper[4680]: I0126 17:23:01.797430 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cj4s" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" probeResult="failure" output=< Jan 26 17:23:01 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:23:01 crc kubenswrapper[4680]: > Jan 26 17:23:11 crc kubenswrapper[4680]: I0126 17:23:11.178509 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:23:11 crc kubenswrapper[4680]: E0126 17:23:11.181463 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:23:11 crc kubenswrapper[4680]: I0126 17:23:11.801714 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cj4s" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" probeResult="failure" output=< Jan 26 17:23:11 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:23:11 crc kubenswrapper[4680]: > Jan 26 17:23:20 crc kubenswrapper[4680]: I0126 17:23:20.794749 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:20 crc kubenswrapper[4680]: I0126 17:23:20.827628 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cj4s" podStartSLOduration=25.066062766 podStartE2EDuration="30.827609214s" podCreationTimestamp="2026-01-26 17:22:50 +0000 UTC" firstStartedPulling="2026-01-26 17:22:52.415670351 +0000 UTC m=+4647.576942620" lastFinishedPulling="2026-01-26 17:22:58.177216799 +0000 UTC m=+4653.338489068" observedRunningTime="2026-01-26 17:22:58.577800518 +0000 UTC m=+4653.739072787" watchObservedRunningTime="2026-01-26 17:23:20.827609214 +0000 UTC m=+4675.988881483" Jan 26 17:23:20 crc kubenswrapper[4680]: I0126 17:23:20.858342 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:21 crc kubenswrapper[4680]: I0126 17:23:21.615564 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:23:22 crc kubenswrapper[4680]: I0126 17:23:22.794784 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cj4s" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" containerID="cri-o://7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec" gracePeriod=2 Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.172894 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:23:23 crc kubenswrapper[4680]: E0126 17:23:23.173687 4680 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.315954 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.394559 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities\") pod \"c915ad36-c36d-4259-8722-076a2167ced9\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.394624 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njp4z\" (UniqueName: \"kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z\") pod \"c915ad36-c36d-4259-8722-076a2167ced9\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.394779 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content\") pod \"c915ad36-c36d-4259-8722-076a2167ced9\" (UID: \"c915ad36-c36d-4259-8722-076a2167ced9\") " Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.395554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities" (OuterVolumeSpecName: "utilities") pod "c915ad36-c36d-4259-8722-076a2167ced9" (UID: "c915ad36-c36d-4259-8722-076a2167ced9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.414911 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z" (OuterVolumeSpecName: "kube-api-access-njp4z") pod "c915ad36-c36d-4259-8722-076a2167ced9" (UID: "c915ad36-c36d-4259-8722-076a2167ced9"). InnerVolumeSpecName "kube-api-access-njp4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.497518 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.497568 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njp4z\" (UniqueName: \"kubernetes.io/projected/c915ad36-c36d-4259-8722-076a2167ced9-kube-api-access-njp4z\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.514755 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c915ad36-c36d-4259-8722-076a2167ced9" (UID: "c915ad36-c36d-4259-8722-076a2167ced9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.599003 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c915ad36-c36d-4259-8722-076a2167ced9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.808967 4680 generic.go:334] "Generic (PLEG): container finished" podID="c915ad36-c36d-4259-8722-076a2167ced9" containerID="7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec" exitCode=0 Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.809017 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerDied","Data":"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec"} Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.809040 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cj4s" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.810188 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cj4s" event={"ID":"c915ad36-c36d-4259-8722-076a2167ced9","Type":"ContainerDied","Data":"2dfeef1c2d0573c86615f2ab81761ae5f007c360a30eb76d44e650ca6d8ee940"} Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.810217 4680 scope.go:117] "RemoveContainer" containerID="7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.863205 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.878246 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cj4s"] Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.878254 4680 scope.go:117] "RemoveContainer" containerID="51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.957562 4680 scope.go:117] "RemoveContainer" containerID="3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.976494 4680 scope.go:117] "RemoveContainer" containerID="7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec" Jan 26 17:23:23 crc kubenswrapper[4680]: E0126 17:23:23.977284 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec\": container with ID starting with 7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec not found: ID does not exist" containerID="7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.977325 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec"} err="failed to get container status \"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec\": rpc error: code = NotFound desc = could not find container \"7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec\": container with ID starting with 7d0fa684d28cb47f709b24a988003a5648c98c556e755fae50e722be2f503eec not found: ID does not exist" Jan 26 17:23:23 crc 
kubenswrapper[4680]: I0126 17:23:23.977352 4680 scope.go:117] "RemoveContainer" containerID="51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8" Jan 26 17:23:23 crc kubenswrapper[4680]: E0126 17:23:23.977655 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8\": container with ID starting with 51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8 not found: ID does not exist" containerID="51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.977687 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8"} err="failed to get container status \"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8\": rpc error: code = NotFound desc = could not find container \"51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8\": container with ID starting with 51b1bedf720e2b2a71c4c9447d4d9e43b15e916e3b3967ab8f72681b12d9a0f8 not found: ID does not exist" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.977712 4680 scope.go:117] "RemoveContainer" containerID="3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68" Jan 26 17:23:23 crc kubenswrapper[4680]: E0126 17:23:23.977930 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68\": container with ID starting with 3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68 not found: ID does not exist" containerID="3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68" Jan 26 17:23:23 crc kubenswrapper[4680]: I0126 17:23:23.977954 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68"} err="failed to get container status \"3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68\": rpc error: code = NotFound desc = could not find container \"3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68\": container with ID starting with 3f315fb366f3769c5fb05b3bbd5fb92a8fa42a5c2d312861854c83d5d4e73f68 not found: ID does not exist" Jan 26 17:23:25 crc kubenswrapper[4680]: I0126 17:23:25.183879 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c915ad36-c36d-4259-8722-076a2167ced9" path="/var/lib/kubelet/pods/c915ad36-c36d-4259-8722-076a2167ced9/volumes" Jan 26 17:23:36 crc kubenswrapper[4680]: I0126 17:23:36.169683 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:23:36 crc kubenswrapper[4680]: E0126 17:23:36.170443 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:23:50 crc kubenswrapper[4680]: I0126 17:23:50.169916 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" 
Jan 26 17:23:50 crc kubenswrapper[4680]: E0126 17:23:50.170894 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:24:04 crc kubenswrapper[4680]: I0126 17:24:04.169557 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:24:04 crc kubenswrapper[4680]: E0126 17:24:04.170369 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:24:15 crc kubenswrapper[4680]: I0126 17:24:15.176679 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:24:15 crc kubenswrapper[4680]: E0126 17:24:15.178551 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:24:29 crc kubenswrapper[4680]: I0126 17:24:29.170529 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:24:29 crc kubenswrapper[4680]: E0126 17:24:29.171844 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:24:41 crc kubenswrapper[4680]: I0126 17:24:41.170142 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:24:41 crc kubenswrapper[4680]: E0126 17:24:41.171901 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:24:53 crc kubenswrapper[4680]: I0126 17:24:53.169365 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:24:53 crc kubenswrapper[4680]: E0126 17:24:53.170149 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:25:05 crc kubenswrapper[4680]: I0126 17:25:05.179775 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:25:05 crc kubenswrapper[4680]: E0126 17:25:05.181049 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:25:20 crc kubenswrapper[4680]: I0126 17:25:20.169779 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:25:20 crc kubenswrapper[4680]: E0126 17:25:20.170674 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:25:32 crc kubenswrapper[4680]: I0126 17:25:32.170268 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:25:32 crc kubenswrapper[4680]: E0126 17:25:32.171521 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:25:46 crc kubenswrapper[4680]: I0126 17:25:46.169514 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:25:46 crc kubenswrapper[4680]: E0126 17:25:46.170298 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:25:59 crc kubenswrapper[4680]: I0126 17:25:59.170322 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:25:59 crc kubenswrapper[4680]: E0126 17:25:59.171279 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:26:11 crc kubenswrapper[4680]: I0126 17:26:11.169562 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:26:11 crc kubenswrapper[4680]: E0126 17:26:11.171312 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:26:22 crc kubenswrapper[4680]: I0126 17:26:22.169634 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:26:22 crc kubenswrapper[4680]: I0126 17:26:22.442659 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2"} Jan 26 17:26:46 crc kubenswrapper[4680]: I0126 17:26:46.937831 4680 scope.go:117] "RemoveContainer" containerID="3ce1716896a0e466ed92ba4f720cba8f6b122c13f78857973db5fdba71dcb66e" Jan 26 17:27:46 crc kubenswrapper[4680]: I0126 17:27:46.984420 4680 scope.go:117] "RemoveContainer" containerID="912ba652b7be3b2c36bcb8a44e9b5837b96f95ed72a41f85a6d278136ade10ba" Jan 26 17:27:47 crc kubenswrapper[4680]: I0126 17:27:47.012542 4680 scope.go:117] "RemoveContainer" containerID="bd3d85b4ada473bf3a8ce6772b3ac5a836dba30954ddd11362fc956fc7240c8f" Jan 26 17:28:46 crc kubenswrapper[4680]: I0126 17:28:46.980590 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:28:46 crc kubenswrapper[4680]: I0126 17:28:46.981148 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:29:16 crc kubenswrapper[4680]: I0126 17:29:16.980983 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:29:16 crc kubenswrapper[4680]: I0126 17:29:16.981532 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:29:46 crc kubenswrapper[4680]: I0126 17:29:46.980826 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:29:46 crc kubenswrapper[4680]: I0126 17:29:46.981438 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:29:46 crc kubenswrapper[4680]: I0126 17:29:46.981491 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:29:46 crc kubenswrapper[4680]: I0126 17:29:46.982351 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:29:46 crc kubenswrapper[4680]: I0126 17:29:46.982402 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2" gracePeriod=600 Jan 26 17:29:47 crc kubenswrapper[4680]: I0126 17:29:47.281164 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2" exitCode=0 Jan 26 17:29:47 crc kubenswrapper[4680]: I0126 17:29:47.281266 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2"} Jan 26 17:29:47 crc kubenswrapper[4680]: I0126 17:29:47.281463 4680 scope.go:117] "RemoveContainer" containerID="ae728037b23e1d4990339807819a17d71a3a3f5fe9368367d9d89813394a9679" Jan 26 17:29:48 crc kubenswrapper[4680]: I0126 17:29:48.295499 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"} Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.225930 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr"] Jan 26 17:30:00 crc kubenswrapper[4680]: E0126 17:30:00.227574 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="extract-content" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.227618 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="extract-content" Jan 26 17:30:00 crc kubenswrapper[4680]: E0126 17:30:00.227651 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="extract-utilities" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.227661 4680 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="extract-utilities" Jan 26 17:30:00 crc kubenswrapper[4680]: E0126 17:30:00.227689 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.227699 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.231773 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c915ad36-c36d-4259-8722-076a2167ced9" containerName="registry-server" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.233634 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.242709 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.243128 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.244677 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr"] Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.378490 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.378568 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.378641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.480222 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.480286 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume\") pod 
\"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.480350 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.481276 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.622795 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.628430 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz\") pod \"collect-profiles-29490810-ngbpr\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:00 crc kubenswrapper[4680]: I0126 17:30:00.875980 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:01 crc kubenswrapper[4680]: I0126 17:30:01.590025 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr"] Jan 26 17:30:02 crc kubenswrapper[4680]: I0126 17:30:02.435785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" event={"ID":"e0ca6cc9-f7a4-47b0-953f-508d6665fff6","Type":"ContainerStarted","Data":"868f589dee77fcc109fc87e5d8a1806debb0702f42d795d1e45f37cfc4647fd1"} Jan 26 17:30:02 crc kubenswrapper[4680]: I0126 17:30:02.436141 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" event={"ID":"e0ca6cc9-f7a4-47b0-953f-508d6665fff6","Type":"ContainerStarted","Data":"eab3f1f5c79154bc8293236ce1202264563731f1cdcef44e803a7194b2b91c31"} Jan 26 17:30:02 crc kubenswrapper[4680]: I0126 17:30:02.468047 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" podStartSLOduration=2.468023896 podStartE2EDuration="2.468023896s" podCreationTimestamp="2026-01-26 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:30:02.461862091 +0000 UTC m=+5077.623134380" watchObservedRunningTime="2026-01-26 17:30:02.468023896 +0000 UTC m=+5077.629296155" Jan 26 17:30:03 crc kubenswrapper[4680]: I0126 17:30:03.444078 4680 generic.go:334] "Generic (PLEG): container finished" podID="e0ca6cc9-f7a4-47b0-953f-508d6665fff6" containerID="868f589dee77fcc109fc87e5d8a1806debb0702f42d795d1e45f37cfc4647fd1" exitCode=0 Jan 26 17:30:03 crc kubenswrapper[4680]: I0126 17:30:03.444381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" event={"ID":"e0ca6cc9-f7a4-47b0-953f-508d6665fff6","Type":"ContainerDied","Data":"868f589dee77fcc109fc87e5d8a1806debb0702f42d795d1e45f37cfc4647fd1"} Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.850949 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.940798 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz\") pod \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.941191 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume\") pod \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.941237 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume\") pod \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\" (UID: \"e0ca6cc9-f7a4-47b0-953f-508d6665fff6\") " Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.942505 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0ca6cc9-f7a4-47b0-953f-508d6665fff6" (UID: "e0ca6cc9-f7a4-47b0-953f-508d6665fff6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.948174 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz" (OuterVolumeSpecName: "kube-api-access-qpjcz") pod "e0ca6cc9-f7a4-47b0-953f-508d6665fff6" (UID: "e0ca6cc9-f7a4-47b0-953f-508d6665fff6"). InnerVolumeSpecName "kube-api-access-qpjcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:30:04 crc kubenswrapper[4680]: I0126 17:30:04.959265 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0ca6cc9-f7a4-47b0-953f-508d6665fff6" (UID: "e0ca6cc9-f7a4-47b0-953f-508d6665fff6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.043182 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-kube-api-access-qpjcz\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.043214 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.043224 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ca6cc9-f7a4-47b0-953f-508d6665fff6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.461877 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" event={"ID":"e0ca6cc9-f7a4-47b0-953f-508d6665fff6","Type":"ContainerDied","Data":"eab3f1f5c79154bc8293236ce1202264563731f1cdcef44e803a7194b2b91c31"} Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.461925 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab3f1f5c79154bc8293236ce1202264563731f1cdcef44e803a7194b2b91c31" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.461989 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr" Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.929413 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9"] Jan 26 17:30:05 crc kubenswrapper[4680]: I0126 17:30:05.937757 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-8ztv9"] Jan 26 17:30:07 crc kubenswrapper[4680]: I0126 17:30:07.419415 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb34efa-8f64-441f-a978-ce7ff6b1f8d8" path="/var/lib/kubelet/pods/acb34efa-8f64-441f-a978-ce7ff6b1f8d8/volumes" Jan 26 17:30:47 crc kubenswrapper[4680]: I0126 17:30:47.111933 4680 scope.go:117] "RemoveContainer" containerID="d474e2f6a8c38d0a3f949002b11f1f72fbef4b6a9b7f7fa9fda7dead4f65eb3e" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.071579 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdqb6"] Jan 26 17:31:43 crc kubenswrapper[4680]: E0126 17:31:43.072598 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ca6cc9-f7a4-47b0-953f-508d6665fff6" containerName="collect-profiles" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.072619 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ca6cc9-f7a4-47b0-953f-508d6665fff6" containerName="collect-profiles" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.072872 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ca6cc9-f7a4-47b0-953f-508d6665fff6" containerName="collect-profiles" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.078828 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.084188 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdqb6"] Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.177837 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-catalog-content\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.177982 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jb5j\" (UniqueName: \"kubernetes.io/projected/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-kube-api-access-2jb5j\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.178029 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-utilities\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.279912 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-utilities\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.280045 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-catalog-content\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.280221 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jb5j\" (UniqueName: \"kubernetes.io/projected/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-kube-api-access-2jb5j\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.280791 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-utilities\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.281146 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-catalog-content\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.438265 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2jb5j\" (UniqueName: \"kubernetes.io/projected/b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0-kube-api-access-2jb5j\") pod \"certified-operators-fdqb6\" (UID: \"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0\") " pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:43 crc kubenswrapper[4680]: I0126 17:31:43.716878 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:31:44 crc kubenswrapper[4680]: I0126 17:31:44.292180 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdqb6"] Jan 26 17:31:44 crc kubenswrapper[4680]: I0126 17:31:44.335866 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdqb6" event={"ID":"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0","Type":"ContainerStarted","Data":"ad6d91686b0096799683a0ebc4c22bc258081979fe3dfca713aa8d15cd363595"} Jan 26 17:31:45 crc kubenswrapper[4680]: I0126 17:31:45.353009 4680 generic.go:334] "Generic (PLEG): container finished" podID="b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0" containerID="d67ffa1d185aac449758cf5977008124ff4046aa2ee2bcaea85e4e53a1c1976d" exitCode=0 Jan 26 17:31:45 crc kubenswrapper[4680]: I0126 17:31:45.353066 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdqb6" event={"ID":"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0","Type":"ContainerDied","Data":"d67ffa1d185aac449758cf5977008124ff4046aa2ee2bcaea85e4e53a1c1976d"} Jan 26 17:31:45 crc kubenswrapper[4680]: I0126 17:31:45.489046 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:31:51 crc kubenswrapper[4680]: I0126 17:31:51.478228 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fppvg" podUID="6fcb2787-4ea2-498d-9d2b-92577f4e0640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:31:57 crc kubenswrapper[4680]: I0126 17:31:57.483101 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdqb6" event={"ID":"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0","Type":"ContainerStarted","Data":"19bd449c9429d22f23d44133081c176ae2c9510702b941ab3c1202a1156959ec"} Jan 26 17:31:58 crc kubenswrapper[4680]: I0126 17:31:58.491981 4680 generic.go:334] "Generic (PLEG): container finished" podID="b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0" containerID="19bd449c9429d22f23d44133081c176ae2c9510702b941ab3c1202a1156959ec" exitCode=0 Jan 26 17:31:58 crc kubenswrapper[4680]: I0126 17:31:58.492059 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdqb6" event={"ID":"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0","Type":"ContainerDied","Data":"19bd449c9429d22f23d44133081c176ae2c9510702b941ab3c1202a1156959ec"} Jan 26 17:32:01 crc kubenswrapper[4680]: I0126 17:32:01.675705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdqb6" event={"ID":"b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0","Type":"ContainerStarted","Data":"d44de1450cbd830575274138b401085b4ddd6770a95686422201f5b2b7c45327"} Jan 26 17:32:01 crc kubenswrapper[4680]: I0126 17:32:01.695868 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdqb6" podStartSLOduration=3.647232099 
podStartE2EDuration="18.695852441s" podCreationTimestamp="2026-01-26 17:31:43 +0000 UTC" firstStartedPulling="2026-01-26 17:31:45.355294331 +0000 UTC m=+5180.516566600" lastFinishedPulling="2026-01-26 17:32:00.403914663 +0000 UTC m=+5195.565186942" observedRunningTime="2026-01-26 17:32:01.692636549 +0000 UTC m=+5196.853908818" watchObservedRunningTime="2026-01-26 17:32:01.695852441 +0000 UTC m=+5196.857124710" Jan 26 17:32:03 crc kubenswrapper[4680]: I0126 17:32:03.717925 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:32:03 crc kubenswrapper[4680]: I0126 17:32:03.718317 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:32:04 crc kubenswrapper[4680]: I0126 17:32:04.768894 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdqb6" podUID="b6f85cc4-65f3-4f93-acda-ee4b4ddeecb0" containerName="registry-server" probeResult="failure" output=< Jan 26 17:32:04 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:32:04 crc kubenswrapper[4680]: > Jan 26 17:32:13 crc kubenswrapper[4680]: I0126 17:32:13.769235 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:32:13 crc kubenswrapper[4680]: I0126 17:32:13.830874 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdqb6" Jan 26 17:32:14 crc kubenswrapper[4680]: I0126 17:32:14.094565 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdqb6"] Jan 26 17:32:14 crc kubenswrapper[4680]: I0126 17:32:14.280290 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 17:32:14 crc kubenswrapper[4680]: I0126 17:32:14.281310 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cs6qq" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" containerID="cri-o://af343a459dd6142a2fdf19814a189026ab0adc17b15060f13c3f6ec659770ed4" gracePeriod=2 Jan 26 17:32:14 crc kubenswrapper[4680]: I0126 17:32:14.787306 4680 generic.go:334] "Generic (PLEG): container finished" podID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerID="af343a459dd6142a2fdf19814a189026ab0adc17b15060f13c3f6ec659770ed4" exitCode=0 Jan 26 17:32:14 crc kubenswrapper[4680]: I0126 17:32:14.787392 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerDied","Data":"af343a459dd6142a2fdf19814a189026ab0adc17b15060f13c3f6ec659770ed4"} Jan 26 17:32:15 crc kubenswrapper[4680]: I0126 17:32:15.962798 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.074773 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities" (OuterVolumeSpecName: "utilities") pod "b400e8fe-6116-4179-8aa0-7e697c9671bd" (UID: "b400e8fe-6116-4179-8aa0-7e697c9671bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.074837 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities\") pod \"b400e8fe-6116-4179-8aa0-7e697c9671bd\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.074927 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64c6v\" (UniqueName: \"kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v\") pod \"b400e8fe-6116-4179-8aa0-7e697c9671bd\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.075050 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content\") pod \"b400e8fe-6116-4179-8aa0-7e697c9671bd\" (UID: \"b400e8fe-6116-4179-8aa0-7e697c9671bd\") " Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.076944 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.082646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v" (OuterVolumeSpecName: "kube-api-access-64c6v") pod "b400e8fe-6116-4179-8aa0-7e697c9671bd" (UID: "b400e8fe-6116-4179-8aa0-7e697c9671bd"). InnerVolumeSpecName "kube-api-access-64c6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.156725 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b400e8fe-6116-4179-8aa0-7e697c9671bd" (UID: "b400e8fe-6116-4179-8aa0-7e697c9671bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.178957 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64c6v\" (UniqueName: \"kubernetes.io/projected/b400e8fe-6116-4179-8aa0-7e697c9671bd-kube-api-access-64c6v\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.178993 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b400e8fe-6116-4179-8aa0-7e697c9671bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.805617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs6qq" event={"ID":"b400e8fe-6116-4179-8aa0-7e697c9671bd","Type":"ContainerDied","Data":"9e468e0965cacb9c2b8916d1ab2ac225a92c82bbb1083a69fa3b8d899cd07dec"} Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.805701 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cs6qq" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.805904 4680 scope.go:117] "RemoveContainer" containerID="af343a459dd6142a2fdf19814a189026ab0adc17b15060f13c3f6ec659770ed4" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.834818 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.843328 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cs6qq"] Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.848595 4680 scope.go:117] "RemoveContainer" containerID="b7ae92a0d5339df2eb2eaa004492b583b4d7018cee07ad2cceb34cd3fa54fca4" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.907539 4680 scope.go:117] "RemoveContainer" containerID="fb44feba061ab8e0da977f89b504e2aa2044042386b8b6a553ae877a22b4f774" Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.981356 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:32:16 crc kubenswrapper[4680]: I0126 17:32:16.981406 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:32:17 crc kubenswrapper[4680]: I0126 17:32:17.183716 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" path="/var/lib/kubelet/pods/b400e8fe-6116-4179-8aa0-7e697c9671bd/volumes" Jan 26 17:32:46 crc kubenswrapper[4680]: I0126 17:32:46.980443 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:32:46 crc kubenswrapper[4680]: I0126 17:32:46.980986 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:33:16 crc kubenswrapper[4680]: I0126 17:33:16.981856 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:33:16 crc kubenswrapper[4680]: I0126 17:33:16.983231 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:33:16 crc kubenswrapper[4680]: I0126 17:33:16.983346 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:33:16 crc kubenswrapper[4680]: I0126 17:33:16.985014 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:33:16 crc kubenswrapper[4680]: I0126 17:33:16.985239 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" gracePeriod=600 Jan 26 17:33:17 crc kubenswrapper[4680]: E0126 17:33:17.110110 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:33:17 crc kubenswrapper[4680]: I0126 17:33:17.431777 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" exitCode=0 Jan 26 17:33:17 crc kubenswrapper[4680]: I0126 17:33:17.431846 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"} Jan 26 17:33:17 crc kubenswrapper[4680]: I0126 17:33:17.431901 4680 scope.go:117] "RemoveContainer" containerID="5c935b30438c8d6e28f1d9b3e81af101a9d96c346b5d9e800b039188963f83d2" Jan 26 17:33:17 crc kubenswrapper[4680]: I0126 17:33:17.432944 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:33:17 crc kubenswrapper[4680]: E0126 17:33:17.433421 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.092541 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:18 crc kubenswrapper[4680]: E0126 17:33:18.093249 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="extract-content" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.093262 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="extract-content" Jan 26 17:33:18 crc kubenswrapper[4680]: E0126 17:33:18.093287 4680 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.093292 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" Jan 26 17:33:18 crc kubenswrapper[4680]: E0126 17:33:18.093319 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="extract-utilities" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.093325 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="extract-utilities" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.093509 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b400e8fe-6116-4179-8aa0-7e697c9671bd" containerName="registry-server" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.094851 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.122667 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.122714 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzfqm\" (UniqueName: \"kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.122804 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.127782 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.227781 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.227828 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzfqm\" (UniqueName: \"kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.227914 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.228953 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.229008 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.248874 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzfqm\" (UniqueName: \"kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm\") pod \"community-operators-bgj2l\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:18 crc kubenswrapper[4680]: I0126 17:33:18.412734 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:19 crc kubenswrapper[4680]: I0126 17:33:19.024524 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:19 crc kubenswrapper[4680]: I0126 17:33:19.473872 4680 generic.go:334] "Generic (PLEG): container finished" podID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerID="85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c" exitCode=0 Jan 26 17:33:19 crc kubenswrapper[4680]: I0126 17:33:19.474171 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerDied","Data":"85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c"} Jan 26 17:33:19 crc kubenswrapper[4680]: I0126 17:33:19.474207 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerStarted","Data":"1d598bed0d992f6f2bd9aad22e25ac1bf1e1cfeea0fc45e351fa0d607bcebb5e"} Jan 26 17:33:20 crc kubenswrapper[4680]: I0126 17:33:20.486483 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerStarted","Data":"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c"} Jan 26 17:33:22 crc kubenswrapper[4680]: I0126 17:33:22.508296 4680 generic.go:334] "Generic (PLEG): container finished" podID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerID="e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c" exitCode=0 Jan 26 17:33:22 crc kubenswrapper[4680]: I0126 17:33:22.508783 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerDied","Data":"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c"} Jan 
26 17:33:23 crc kubenswrapper[4680]: I0126 17:33:23.520285 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerStarted","Data":"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753"} Jan 26 17:33:23 crc kubenswrapper[4680]: I0126 17:33:23.553383 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgj2l" podStartSLOduration=2.016774253 podStartE2EDuration="5.553352292s" podCreationTimestamp="2026-01-26 17:33:18 +0000 UTC" firstStartedPulling="2026-01-26 17:33:19.476456714 +0000 UTC m=+5274.637728983" lastFinishedPulling="2026-01-26 17:33:23.013034753 +0000 UTC m=+5278.174307022" observedRunningTime="2026-01-26 17:33:23.541453014 +0000 UTC m=+5278.702725283" watchObservedRunningTime="2026-01-26 17:33:23.553352292 +0000 UTC m=+5278.714624561" Jan 26 17:33:28 crc kubenswrapper[4680]: I0126 17:33:28.414670 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:28 crc kubenswrapper[4680]: I0126 17:33:28.415207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:28 crc kubenswrapper[4680]: I0126 17:33:28.461829 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:28 crc kubenswrapper[4680]: I0126 17:33:28.603764 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:28 crc kubenswrapper[4680]: I0126 17:33:28.695118 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:30 crc kubenswrapper[4680]: I0126 17:33:30.575518 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgj2l" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="registry-server" containerID="cri-o://f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753" gracePeriod=2 Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.060195 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.170548 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:33:31 crc kubenswrapper[4680]: E0126 17:33:31.171012 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.206170 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzfqm\" (UniqueName: \"kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm\") pod \"db14ad60-90dd-4c02-864a-11b42d5e440e\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.206345 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content\") pod \"db14ad60-90dd-4c02-864a-11b42d5e440e\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.206439 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities\") pod \"db14ad60-90dd-4c02-864a-11b42d5e440e\" (UID: \"db14ad60-90dd-4c02-864a-11b42d5e440e\") " Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.211595 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities" (OuterVolumeSpecName: "utilities") pod "db14ad60-90dd-4c02-864a-11b42d5e440e" (UID: "db14ad60-90dd-4c02-864a-11b42d5e440e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.227536 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm" (OuterVolumeSpecName: "kube-api-access-vzfqm") pod "db14ad60-90dd-4c02-864a-11b42d5e440e" (UID: "db14ad60-90dd-4c02-864a-11b42d5e440e"). InnerVolumeSpecName "kube-api-access-vzfqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.273645 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db14ad60-90dd-4c02-864a-11b42d5e440e" (UID: "db14ad60-90dd-4c02-864a-11b42d5e440e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.308928 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.308967 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzfqm\" (UniqueName: \"kubernetes.io/projected/db14ad60-90dd-4c02-864a-11b42d5e440e-kube-api-access-vzfqm\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.308979 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db14ad60-90dd-4c02-864a-11b42d5e440e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.585789 4680 generic.go:334] "Generic (PLEG): container finished" podID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerID="f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753" exitCode=0 Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.585830 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerDied","Data":"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753"} Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.585854 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgj2l" event={"ID":"db14ad60-90dd-4c02-864a-11b42d5e440e","Type":"ContainerDied","Data":"1d598bed0d992f6f2bd9aad22e25ac1bf1e1cfeea0fc45e351fa0d607bcebb5e"} Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.585870 4680 scope.go:117] "RemoveContainer" containerID="f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.585987 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgj2l" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.624903 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.628797 4680 scope.go:117] "RemoveContainer" containerID="e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c" Jan 26 17:33:31 crc kubenswrapper[4680]: I0126 17:33:31.646909 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgj2l"] Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.106671 4680 scope.go:117] "RemoveContainer" containerID="85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.132916 4680 scope.go:117] "RemoveContainer" containerID="f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753" Jan 26 17:33:32 crc kubenswrapper[4680]: E0126 17:33:32.133390 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753\": container with ID starting with f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753 not found: ID does not exist" containerID="f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.133429 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753"} err="failed to get container status \"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753\": rpc error: code = NotFound desc = could not find container \"f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753\": container with ID starting with f887110e57cfe2e29fc707d966b3426fbb0c6718e8605cb37eca148d5e8ba753 not found: ID does not exist" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.133456 4680 scope.go:117] "RemoveContainer" containerID="e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c" Jan 26 17:33:32 crc kubenswrapper[4680]: E0126 17:33:32.133920 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c\": container with ID starting with e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c not found: ID does not exist" containerID="e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.133951 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c"} err="failed to get container status \"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c\": rpc error: code = NotFound desc = could not find container \"e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c\": container with ID starting with e42fe5a5b1b7457c0978f0814d35a2600f9381cd58ea7baa131800c482ef7d3c not found: ID does not exist" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.133969 4680 scope.go:117] "RemoveContainer" containerID="85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c" Jan 26 17:33:32 crc kubenswrapper[4680]: E0126 17:33:32.134429 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c\": container with ID starting with 85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c not found: ID does not exist" containerID="85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c" Jan 26 17:33:32 crc kubenswrapper[4680]: I0126 17:33:32.134471 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c"} err="failed to get container status \"85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c\": rpc error: code = NotFound desc = could not find container \"85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c\": container with ID starting with 85bbda4edc079635c2797858eed018203883bba05c84e6b1deff060214f8459c not found: ID does not exist" Jan 26 17:33:33 crc kubenswrapper[4680]: I0126 17:33:33.180668 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" path="/var/lib/kubelet/pods/db14ad60-90dd-4c02-864a-11b42d5e440e/volumes" Jan 26 17:33:45 crc kubenswrapper[4680]: I0126 17:33:45.175778 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:33:45 crc kubenswrapper[4680]: E0126 17:33:45.176865 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:33:59 crc kubenswrapper[4680]: I0126 17:33:59.170644 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:33:59 crc kubenswrapper[4680]: E0126 17:33:59.171467 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:34:10 crc kubenswrapper[4680]: I0126 17:34:10.169305 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:34:10 crc kubenswrapper[4680]: E0126 17:34:10.170181 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:34:25 crc kubenswrapper[4680]: I0126 17:34:25.176219 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:34:25 crc kubenswrapper[4680]: E0126 17:34:25.176942 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.910682 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"] Jan 26 17:34:27 crc kubenswrapper[4680]: E0126 17:34:27.911493 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="extract-utilities" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.911512 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="extract-utilities" Jan 26 17:34:27 crc kubenswrapper[4680]: E0126 17:34:27.911541 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="extract-content" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.911549 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="extract-content" Jan 26 17:34:27 crc kubenswrapper[4680]: E0126 17:34:27.911568 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="registry-server" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.911575 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="registry-server" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.911816 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="db14ad60-90dd-4c02-864a-11b42d5e440e" containerName="registry-server" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.913905 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.938535 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"] Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.993976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxdcp\" (UniqueName: \"kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.994182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:27 crc kubenswrapper[4680]: I0126 17:34:27.994205 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.095669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.095719 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.095828 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxdcp\" (UniqueName: \"kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.096304 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.096353 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.130520 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pxdcp\" (UniqueName: \"kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp\") pod \"redhat-operators-8r4vb\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") " pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.230847 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:28 crc kubenswrapper[4680]: I0126 17:34:28.778303 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"] Jan 26 17:34:29 crc kubenswrapper[4680]: I0126 17:34:29.055598 4680 generic.go:334] "Generic (PLEG): container finished" podID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerID="10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc" exitCode=0 Jan 26 17:34:29 crc kubenswrapper[4680]: I0126 17:34:29.055655 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerDied","Data":"10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc"} Jan 26 17:34:29 crc kubenswrapper[4680]: I0126 17:34:29.055725 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerStarted","Data":"637674f18da5adbeeb8bf8923601e3ae2ed5dd91361510bcfc1251af7487027d"} Jan 26 17:34:31 crc kubenswrapper[4680]: I0126 17:34:31.080688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerStarted","Data":"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"} Jan 26 17:34:33 crc kubenswrapper[4680]: I0126 17:34:33.822737 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:34:33 crc kubenswrapper[4680]: I0126 17:34:33.824956 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="6e6f45ac-80ed-41f2-b9b8-94e60a1656d4" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:34:35 crc kubenswrapper[4680]: I0126 17:34:35.116588 4680 generic.go:334] "Generic (PLEG): container finished" podID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerID="53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262" exitCode=0 Jan 26 17:34:35 crc kubenswrapper[4680]: I0126 17:34:35.116677 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerDied","Data":"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"} Jan 26 17:34:36 crc kubenswrapper[4680]: I0126 17:34:36.126264 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerStarted","Data":"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"} Jan 26 17:34:36 crc kubenswrapper[4680]: I0126 17:34:36.149204 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8r4vb" podStartSLOduration=2.638954653 podStartE2EDuration="9.14918697s" 
podCreationTimestamp="2026-01-26 17:34:27 +0000 UTC" firstStartedPulling="2026-01-26 17:34:29.058095703 +0000 UTC m=+5344.219367972" lastFinishedPulling="2026-01-26 17:34:35.56832802 +0000 UTC m=+5350.729600289" observedRunningTime="2026-01-26 17:34:36.144800615 +0000 UTC m=+5351.306072894" watchObservedRunningTime="2026-01-26 17:34:36.14918697 +0000 UTC m=+5351.310459239" Jan 26 17:34:38 crc kubenswrapper[4680]: I0126 17:34:38.169516 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:34:38 crc kubenswrapper[4680]: E0126 17:34:38.169908 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:34:38 crc kubenswrapper[4680]: I0126 17:34:38.231026 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:38 crc kubenswrapper[4680]: I0126 17:34:38.231371 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:39 crc kubenswrapper[4680]: I0126 17:34:39.279135 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8r4vb" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="registry-server" probeResult="failure" output=< Jan 26 17:34:39 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:34:39 crc kubenswrapper[4680]: > Jan 26 17:34:48 crc kubenswrapper[4680]: I0126 17:34:48.303999 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:48 crc kubenswrapper[4680]: I0126 17:34:48.389858 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8r4vb" Jan 26 17:34:48 crc kubenswrapper[4680]: I0126 17:34:48.561430 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"] Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.251718 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8r4vb" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="registry-server" containerID="cri-o://a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6" gracePeriod=2 Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.752586 4680 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.778933 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxdcp\" (UniqueName: \"kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp\") pod \"e114f31f-e20e-454f-a6fc-f869f86e58d0\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") "
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.779034 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content\") pod \"e114f31f-e20e-454f-a6fc-f869f86e58d0\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") "
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.779155 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities\") pod \"e114f31f-e20e-454f-a6fc-f869f86e58d0\" (UID: \"e114f31f-e20e-454f-a6fc-f869f86e58d0\") "
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.780336 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities" (OuterVolumeSpecName: "utilities") pod "e114f31f-e20e-454f-a6fc-f869f86e58d0" (UID: "e114f31f-e20e-454f-a6fc-f869f86e58d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.793018 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp" (OuterVolumeSpecName: "kube-api-access-pxdcp") pod "e114f31f-e20e-454f-a6fc-f869f86e58d0" (UID: "e114f31f-e20e-454f-a6fc-f869f86e58d0"). InnerVolumeSpecName "kube-api-access-pxdcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.881235 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxdcp\" (UniqueName: \"kubernetes.io/projected/e114f31f-e20e-454f-a6fc-f869f86e58d0-kube-api-access-pxdcp\") on node \"crc\" DevicePath \"\""
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.881267 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.912603 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e114f31f-e20e-454f-a6fc-f869f86e58d0" (UID: "e114f31f-e20e-454f-a6fc-f869f86e58d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:34:50 crc kubenswrapper[4680]: I0126 17:34:50.983556 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e114f31f-e20e-454f-a6fc-f869f86e58d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.266979 4680 generic.go:334] "Generic (PLEG): container finished" podID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerID="a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6" exitCode=0
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.267031 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerDied","Data":"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"}
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.267059 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r4vb" event={"ID":"e114f31f-e20e-454f-a6fc-f869f86e58d0","Type":"ContainerDied","Data":"637674f18da5adbeeb8bf8923601e3ae2ed5dd91361510bcfc1251af7487027d"}
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.267096 4680 scope.go:117] "RemoveContainer" containerID="a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.267244 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r4vb"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.298879 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"]
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.313186 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8r4vb"]
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.323151 4680 scope.go:117] "RemoveContainer" containerID="53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.350758 4680 scope.go:117] "RemoveContainer" containerID="10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.384405 4680 scope.go:117] "RemoveContainer" containerID="a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"
Jan 26 17:34:51 crc kubenswrapper[4680]: E0126 17:34:51.385171 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6\": container with ID starting with a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6 not found: ID does not exist" containerID="a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.385199 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6"} err="failed to get container status \"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6\": rpc error: code = NotFound desc = could not find container \"a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6\": container with ID starting with a6184fc65228cc831104a82662429ad8778cb8040c17c9522064d14652c7e2d6 not found: ID does not exist"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.385217 4680 scope.go:117] "RemoveContainer" containerID="53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"
Jan 26 17:34:51 crc kubenswrapper[4680]: E0126 17:34:51.385473 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262\": container with ID starting with 53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262 not found: ID does not exist" containerID="53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.385487 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262"} err="failed to get container status \"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262\": rpc error: code = NotFound desc = could not find container \"53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262\": container with ID starting with 53b27ee29efcf2b683c35b3f15656fa1275990cb1031bb9977f6f1015352c262 not found: ID does not exist"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.385497 4680 scope.go:117] "RemoveContainer" containerID="10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc"
Jan 26 17:34:51 crc kubenswrapper[4680]: E0126 17:34:51.385665 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc\": container with ID starting with 10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc not found: ID does not exist" containerID="10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc"
Jan 26 17:34:51 crc kubenswrapper[4680]: I0126 17:34:51.385681 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc"} err="failed to get container status \"10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc\": rpc error: code = NotFound desc = could not find container \"10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc\": container with ID starting with 10edc927254c91cf2eab95d1c3ccb1e50b65863c753dbc874df6f1f48a659ecc not found: ID does not exist"
Jan 26 17:34:53 crc kubenswrapper[4680]: I0126 17:34:53.169813 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:34:53 crc kubenswrapper[4680]: E0126 17:34:53.170664 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:34:53 crc kubenswrapper[4680]: I0126 17:34:53.180567 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" path="/var/lib/kubelet/pods/e114f31f-e20e-454f-a6fc-f869f86e58d0/volumes"
Jan 26 17:35:05 crc kubenswrapper[4680]: I0126 17:35:05.176479 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
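The three "ContainerStatus from runtime service failed ... NotFound" errors above are the kubelet asking CRI-O about containers it has already deleted; the retried DeleteContainer calls race against the successful removal and are harmless. A quick node-side check, assuming crictl is available on the crc node (the ID prefix is from this log):

  # List all containers, including exited ones, and look for the removed ID:
  crictl ps -a | grep a6184fc65228 || echo "container gone"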
Jan 26 17:35:05 crc kubenswrapper[4680]: E0126 17:35:05.178316 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:35:19 crc kubenswrapper[4680]: I0126 17:35:19.170015 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:35:19 crc kubenswrapper[4680]: E0126 17:35:19.171116 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:35:32 crc kubenswrapper[4680]: I0126 17:35:32.170473 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:35:32 crc kubenswrapper[4680]: E0126 17:35:32.171271 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:35:44 crc kubenswrapper[4680]: I0126 17:35:44.169929 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:35:44 crc kubenswrapper[4680]: E0126 17:35:44.170856 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:35:55 crc kubenswrapper[4680]: I0126 17:35:55.170058 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:35:55 crc kubenswrapper[4680]: E0126 17:35:55.170845 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:36:10 crc kubenswrapper[4680]: I0126 17:36:10.169304 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:36:10 crc kubenswrapper[4680]: E0126 17:36:10.170015 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:36:25 crc kubenswrapper[4680]: I0126 17:36:25.176066 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:36:25 crc kubenswrapper[4680]: E0126 17:36:25.176990 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:36:36 crc kubenswrapper[4680]: I0126 17:36:36.170191 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:36:36 crc kubenswrapper[4680]: E0126 17:36:36.171010 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:36:50 crc kubenswrapper[4680]: I0126 17:36:50.171238 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:36:50 crc kubenswrapper[4680]: E0126 17:36:50.172965 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:37:05 crc kubenswrapper[4680]: I0126 17:37:05.176464 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:37:05 crc kubenswrapper[4680]: E0126 17:37:05.177266 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:37:19 crc kubenswrapper[4680]: I0126 17:37:19.170058 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:37:19 crc kubenswrapper[4680]: E0126 17:37:19.170851 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:37:30 crc kubenswrapper[4680]: I0126 17:37:30.170436 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:37:30 crc kubenswrapper[4680]: E0126 17:37:30.171226 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:37:42 crc kubenswrapper[4680]: I0126 17:37:42.170395 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:37:42 crc kubenswrapper[4680]: E0126 17:37:42.171124 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:37:57 crc kubenswrapper[4680]: I0126 17:37:57.170062 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:37:57 crc kubenswrapper[4680]: E0126 17:37:57.170885 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:38:09 crc kubenswrapper[4680]: I0126 17:38:09.170123 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:38:09 crc kubenswrapper[4680]: E0126 17:38:09.170982 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 17:38:20 crc kubenswrapper[4680]: I0126 17:38:20.170052 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc"
Jan 26 17:38:21 crc kubenswrapper[4680]: I0126 17:38:21.069123 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0"}
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.190333 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"]
Jan 26 17:40:32 crc kubenswrapper[4680]: E0126 17:40:32.191300 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="extract-utilities"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.191317 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="extract-utilities"
Jan 26 17:40:32 crc kubenswrapper[4680]: E0126 17:40:32.191347 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="extract-content"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.191370 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="extract-content"
Jan 26 17:40:32 crc kubenswrapper[4680]: E0126 17:40:32.191388 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="registry-server"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.191395 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="registry-server"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.191640 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e114f31f-e20e-454f-a6fc-f869f86e58d0" containerName="registry-server"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.193390 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.218790 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"]
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.242236 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.242287 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.242444 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2rm\" (UniqueName: \"kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.344191 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw2rm\" (UniqueName: \"kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.344274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.344303 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.344963 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.345016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.366947 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw2rm\" (UniqueName: \"kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm\") pod \"redhat-marketplace-9wjfl\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") " pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:32 crc kubenswrapper[4680]: I0126 17:40:32.524833 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:33 crc kubenswrapper[4680]: I0126 17:40:33.036907 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"]
Jan 26 17:40:33 crc kubenswrapper[4680]: I0126 17:40:33.255490 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerStarted","Data":"a7dbcadf793f34b64198ced4bda8a238e8243d2a2c0d5e0684a7943ffc154d9d"}
Jan 26 17:40:34 crc kubenswrapper[4680]: I0126 17:40:34.264695 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerID="647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001" exitCode=0
Jan 26 17:40:34 crc kubenswrapper[4680]: I0126 17:40:34.264743 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerDied","Data":"647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001"}
Jan 26 17:40:34 crc kubenswrapper[4680]: I0126 17:40:34.266751 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:40:35 crc kubenswrapper[4680]: I0126 17:40:35.292047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerStarted","Data":"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b"}
Jan 26 17:40:36 crc kubenswrapper[4680]: I0126 17:40:36.301218 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerID="7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b" exitCode=0
Jan 26 17:40:36 crc kubenswrapper[4680]: I0126 17:40:36.301262 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerDied","Data":"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b"}
Jan 26 17:40:37 crc kubenswrapper[4680]: I0126 17:40:37.312255 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerStarted","Data":"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618"}
Jan 26 17:40:37 crc kubenswrapper[4680]: I0126 17:40:37.341001 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9wjfl" podStartSLOduration=2.896391065 podStartE2EDuration="5.340982936s" podCreationTimestamp="2026-01-26 17:40:32 +0000 UTC" firstStartedPulling="2026-01-26 17:40:34.266176131 +0000 UTC m=+5709.427448420" lastFinishedPulling="2026-01-26 17:40:36.710768022 +0000 UTC m=+5711.872040291" observedRunningTime="2026-01-26 17:40:37.328969095 +0000 UTC m=+5712.490241354" watchObservedRunningTime="2026-01-26 17:40:37.340982936 +0000 UTC m=+5712.502255205"
Jan 26 17:40:42 crc kubenswrapper[4680]: I0126 17:40:42.525266 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:42 crc kubenswrapper[4680]: I0126 17:40:42.525856 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:42 crc kubenswrapper[4680]: I0126 17:40:42.574978 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:43 crc kubenswrapper[4680]: I0126 17:40:43.413536 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:43 crc kubenswrapper[4680]: I0126 17:40:43.477628 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"]
Jan 26 17:40:45 crc kubenswrapper[4680]: I0126 17:40:45.373574 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9wjfl" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="registry-server" containerID="cri-o://c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618" gracePeriod=2
Jan 26 17:40:45 crc kubenswrapper[4680]: I0126 17:40:45.871559 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wjfl"
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.044038 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw2rm\" (UniqueName: \"kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm\") pod \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") "
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.044148 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities\") pod \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") "
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.044175 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content\") pod \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\" (UID: \"5e1ec552-4d97-412f-bbb5-f3a76a8de293\") "
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.045243 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities" (OuterVolumeSpecName: "utilities") pod "5e1ec552-4d97-412f-bbb5-f3a76a8de293" (UID: "5e1ec552-4d97-412f-bbb5-f3a76a8de293"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.057251 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm" (OuterVolumeSpecName: "kube-api-access-xw2rm") pod "5e1ec552-4d97-412f-bbb5-f3a76a8de293" (UID: "5e1ec552-4d97-412f-bbb5-f3a76a8de293"). InnerVolumeSpecName "kube-api-access-xw2rm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.064164 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e1ec552-4d97-412f-bbb5-f3a76a8de293" (UID: "5e1ec552-4d97-412f-bbb5-f3a76a8de293"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
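The startup-latency record above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:40:37.340982936 - 17:40:32 = 5.340982936s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 17:40:36.710768022 - 17:40:34.266176131 ≈ 2.445s; 5.341 - 2.445 ≈ 2.896s). So the SLO figure measures startup time excluding image pulls; the same arithmetic holds for the certified-operators-j7qq8 record later in this log (6.015 - 3.443 ≈ 2.572).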
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.146834 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.146873 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ec552-4d97-412f-bbb5-f3a76a8de293-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.146886 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw2rm\" (UniqueName: \"kubernetes.io/projected/5e1ec552-4d97-412f-bbb5-f3a76a8de293-kube-api-access-xw2rm\") on node \"crc\" DevicePath \"\"" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.385789 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerID="c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618" exitCode=0 Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.385853 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wjfl" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.385882 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerDied","Data":"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618"} Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.386186 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wjfl" event={"ID":"5e1ec552-4d97-412f-bbb5-f3a76a8de293","Type":"ContainerDied","Data":"a7dbcadf793f34b64198ced4bda8a238e8243d2a2c0d5e0684a7943ffc154d9d"} Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.386209 4680 scope.go:117] "RemoveContainer" containerID="c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.405827 4680 scope.go:117] "RemoveContainer" containerID="7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.421908 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"] Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.429007 4680 scope.go:117] "RemoveContainer" containerID="647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.435678 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wjfl"] Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.474185 4680 scope.go:117] "RemoveContainer" containerID="c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618" Jan 26 17:40:46 crc kubenswrapper[4680]: E0126 17:40:46.474725 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618\": container with ID starting with c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618 not found: ID does not exist" containerID="c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.474779 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618"} err="failed to get container status \"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618\": rpc error: code = NotFound desc = could not find container \"c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618\": container with ID starting with c8da782df2d1d6c460abd8d42e5be2a8ea801c3b4090ae15b0db18e7490bd618 not found: ID does not exist" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.474808 4680 scope.go:117] "RemoveContainer" containerID="7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b" Jan 26 17:40:46 crc kubenswrapper[4680]: E0126 17:40:46.475318 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b\": container with ID starting with 7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b not found: ID does not exist" containerID="7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.475410 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b"} err="failed to get container status \"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b\": rpc error: code = NotFound desc = could not find container \"7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b\": container with ID starting with 7f39df8049924d3b120de13e20494edce2b1babbc733a05279e015266151298b not found: ID does not exist" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.475496 4680 scope.go:117] "RemoveContainer" containerID="647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001" Jan 26 17:40:46 crc kubenswrapper[4680]: E0126 17:40:46.475856 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001\": container with ID starting with 647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001 not found: ID does not exist" containerID="647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.475881 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001"} err="failed to get container status \"647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001\": rpc error: code = NotFound desc = could not find container \"647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001\": container with ID starting with 647e313c3fe669a1e97efe5b51238e9c6de5ef4f2eb860f85fe3e1fb8b57e001 not found: ID does not exist" Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.980741 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:40:46 crc kubenswrapper[4680]: I0126 17:40:46.980854 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:40:47 crc kubenswrapper[4680]: I0126 17:40:47.180033 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" path="/var/lib/kubelet/pods/5e1ec552-4d97-412f-bbb5-f3a76a8de293/volumes" Jan 26 17:41:16 crc kubenswrapper[4680]: I0126 17:41:16.981427 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:41:16 crc kubenswrapper[4680]: I0126 17:41:16.982013 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:41:46 crc kubenswrapper[4680]: I0126 17:41:46.980695 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:41:46 crc kubenswrapper[4680]: I0126 17:41:46.981255 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:41:46 crc kubenswrapper[4680]: I0126 17:41:46.981295 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:41:46 crc kubenswrapper[4680]: I0126 17:41:46.981883 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:41:46 crc kubenswrapper[4680]: I0126 17:41:46.981925 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0" gracePeriod=600 Jan 26 17:41:47 crc kubenswrapper[4680]: I0126 17:41:47.895988 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0" exitCode=0 Jan 26 17:41:47 crc kubenswrapper[4680]: I0126 17:41:47.896052 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0"} Jan 26 17:41:47 crc kubenswrapper[4680]: I0126 17:41:47.896524 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5"} Jan 26 17:41:47 crc kubenswrapper[4680]: I0126 17:41:47.896548 4680 scope.go:117] "RemoveContainer" containerID="d28ed51e9a70b5a50ca63bc8461dc410f89a28d82741298bc6b73bdeb10bafdc" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.314858 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"] Jan 26 17:41:50 crc kubenswrapper[4680]: E0126 17:41:50.315702 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="registry-server" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.315716 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="registry-server" Jan 26 17:41:50 crc kubenswrapper[4680]: E0126 17:41:50.315738 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="extract-utilities" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.315744 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="extract-utilities" Jan 26 17:41:50 crc kubenswrapper[4680]: E0126 17:41:50.315755 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="extract-content" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.315762 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="extract-content" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.315949 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1ec552-4d97-412f-bbb5-f3a76a8de293" containerName="registry-server" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.317637 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.344128 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"] Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.397444 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdmb\" (UniqueName: \"kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.397504 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.397534 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.499186 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdmb\" (UniqueName: \"kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.499232 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.499255 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.499770 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.499818 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.522910 4680 operation_generator.go:637] 
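This is the third openshift-marketplace catalog pod in roughly ten minutes following the identical create, extract, serve, delete cycle (redhat-operators-8r4vb, redhat-marketplace-9wjfl, now certified-operators-j7qq8). Each exits with code 0 and is deleted via the API (source="api"), so the churn looks like the catalog source's periodic refresh rather than a crash loop; that interpretation is an inference from the pattern, not stated in the log. To watch the cycle live:

  oc -n openshift-marketplace get pods -w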
"MountVolume.SetUp succeeded for volume \"kube-api-access-rfdmb\" (UniqueName: \"kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb\") pod \"certified-operators-j7qq8\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:50 crc kubenswrapper[4680]: I0126 17:41:50.640775 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:41:51 crc kubenswrapper[4680]: I0126 17:41:51.181475 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"] Jan 26 17:41:51 crc kubenswrapper[4680]: I0126 17:41:51.930794 4680 generic.go:334] "Generic (PLEG): container finished" podID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerID="14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112" exitCode=0 Jan 26 17:41:51 crc kubenswrapper[4680]: I0126 17:41:51.930848 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerDied","Data":"14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112"} Jan 26 17:41:51 crc kubenswrapper[4680]: I0126 17:41:51.931161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerStarted","Data":"29404f7ee6ea339008441c0366bcc3a6420efe655251d6c402b47be3fc3ef84b"} Jan 26 17:41:53 crc kubenswrapper[4680]: I0126 17:41:53.966803 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerStarted","Data":"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"} Jan 26 17:41:54 crc kubenswrapper[4680]: I0126 17:41:54.976721 4680 generic.go:334] "Generic (PLEG): container finished" podID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerID="1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d" exitCode=0 Jan 26 17:41:54 crc kubenswrapper[4680]: I0126 17:41:54.976764 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerDied","Data":"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"} Jan 26 17:41:55 crc kubenswrapper[4680]: I0126 17:41:55.988955 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerStarted","Data":"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121"} Jan 26 17:41:56 crc kubenswrapper[4680]: I0126 17:41:56.014928 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j7qq8" podStartSLOduration=2.57186751 podStartE2EDuration="6.014902258s" podCreationTimestamp="2026-01-26 17:41:50 +0000 UTC" firstStartedPulling="2026-01-26 17:41:51.933925807 +0000 UTC m=+5787.095198066" lastFinishedPulling="2026-01-26 17:41:55.376960545 +0000 UTC m=+5790.538232814" observedRunningTime="2026-01-26 17:41:56.004089082 +0000 UTC m=+5791.165361351" watchObservedRunningTime="2026-01-26 17:41:56.014902258 +0000 UTC m=+5791.176174527" Jan 26 17:42:00 crc kubenswrapper[4680]: I0126 17:42:00.641265 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:42:00 crc kubenswrapper[4680]: I0126 17:42:00.641799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:42:00 crc kubenswrapper[4680]: I0126 17:42:00.695673 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:42:01 crc kubenswrapper[4680]: I0126 17:42:01.084165 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:42:01 crc kubenswrapper[4680]: I0126 17:42:01.132224 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"] Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.048491 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j7qq8" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="registry-server" containerID="cri-o://16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121" gracePeriod=2 Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.578722 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7qq8" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.651661 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content\") pod \"5beb729d-c724-4641-b04c-6f13cd27b35f\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.651752 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfdmb\" (UniqueName: \"kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb\") pod \"5beb729d-c724-4641-b04c-6f13cd27b35f\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.651887 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities\") pod \"5beb729d-c724-4641-b04c-6f13cd27b35f\" (UID: \"5beb729d-c724-4641-b04c-6f13cd27b35f\") " Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.653010 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities" (OuterVolumeSpecName: "utilities") pod "5beb729d-c724-4641-b04c-6f13cd27b35f" (UID: "5beb729d-c724-4641-b04c-6f13cd27b35f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.657718 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb" (OuterVolumeSpecName: "kube-api-access-rfdmb") pod "5beb729d-c724-4641-b04c-6f13cd27b35f" (UID: "5beb729d-c724-4641-b04c-6f13cd27b35f"). InnerVolumeSpecName "kube-api-access-rfdmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.699852 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5beb729d-c724-4641-b04c-6f13cd27b35f" (UID: "5beb729d-c724-4641-b04c-6f13cd27b35f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.753748 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfdmb\" (UniqueName: \"kubernetes.io/projected/5beb729d-c724-4641-b04c-6f13cd27b35f-kube-api-access-rfdmb\") on node \"crc\" DevicePath \"\"" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.753782 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:42:03 crc kubenswrapper[4680]: I0126 17:42:03.753791 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5beb729d-c724-4641-b04c-6f13cd27b35f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.058661 4680 generic.go:334] "Generic (PLEG): container finished" podID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerID="16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121" exitCode=0 Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.058704 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerDied","Data":"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121"} Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.058731 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7qq8" event={"ID":"5beb729d-c724-4641-b04c-6f13cd27b35f","Type":"ContainerDied","Data":"29404f7ee6ea339008441c0366bcc3a6420efe655251d6c402b47be3fc3ef84b"} Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.058750 4680 scope.go:117] "RemoveContainer" containerID="16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121" Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.058791 4680 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.083195 4680 scope.go:117] "RemoveContainer" containerID="1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.098227 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"]
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.108468 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j7qq8"]
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.115187 4680 scope.go:117] "RemoveContainer" containerID="14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.154679 4680 scope.go:117] "RemoveContainer" containerID="16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121"
Jan 26 17:42:04 crc kubenswrapper[4680]: E0126 17:42:04.155457 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121\": container with ID starting with 16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121 not found: ID does not exist" containerID="16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.155506 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121"} err="failed to get container status \"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121\": rpc error: code = NotFound desc = could not find container \"16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121\": container with ID starting with 16ceb432c01cc56cecbfc54883501e173e628e49179f3d53b8ff44df12ef6121 not found: ID does not exist"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.155527 4680 scope.go:117] "RemoveContainer" containerID="1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"
Jan 26 17:42:04 crc kubenswrapper[4680]: E0126 17:42:04.155993 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d\": container with ID starting with 1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d not found: ID does not exist" containerID="1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.156043 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d"} err="failed to get container status \"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d\": rpc error: code = NotFound desc = could not find container \"1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d\": container with ID starting with 1a1ad336877bb9c2f9310a3ce65fffe0bf7dd3fdef034e875408803ea21fcd4d not found: ID does not exist"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.156106 4680 scope.go:117] "RemoveContainer" containerID="14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112"
Jan 26 17:42:04 crc kubenswrapper[4680]: E0126 17:42:04.156387 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112\": container with ID starting with 14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112 not found: ID does not exist" containerID="14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112"
Jan 26 17:42:04 crc kubenswrapper[4680]: I0126 17:42:04.156436 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112"} err="failed to get container status \"14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112\": rpc error: code = NotFound desc = could not find container \"14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112\": container with ID starting with 14eaa310d2e7b5f4a78fe0dae95863e7216cf3e066cc024e30fcb8485bd2b112 not found: ID does not exist"
Jan 26 17:42:05 crc kubenswrapper[4680]: I0126 17:42:05.181544 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" path="/var/lib/kubelet/pods/5beb729d-c724-4641-b04c-6f13cd27b35f/volumes"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.423405 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hv6b"]
Jan 26 17:43:39 crc kubenswrapper[4680]: E0126 17:43:39.425513 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="extract-content"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.429373 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="extract-content"
Jan 26 17:43:39 crc kubenswrapper[4680]: E0126 17:43:39.429476 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="extract-utilities"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.429548 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="extract-utilities"
Jan 26 17:43:39 crc kubenswrapper[4680]: E0126 17:43:39.429643 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="registry-server"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.429727 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="registry-server"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.430213 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5beb729d-c724-4641-b04c-6f13cd27b35f" containerName="registry-server"
Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.431724 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hv6b"
Need to start a new one" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.439414 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hv6b"] Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.530244 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5p86\" (UniqueName: \"kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.530292 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.530419 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.632716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5p86\" (UniqueName: \"kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.632776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.632880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.633435 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.633514 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.655632 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c5p86\" (UniqueName: \"kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86\") pod \"community-operators-2hv6b\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:39 crc kubenswrapper[4680]: I0126 17:43:39.753793 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:40 crc kubenswrapper[4680]: I0126 17:43:40.316500 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hv6b"] Jan 26 17:43:40 crc kubenswrapper[4680]: I0126 17:43:40.925733 4680 generic.go:334] "Generic (PLEG): container finished" podID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerID="304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb" exitCode=0 Jan 26 17:43:40 crc kubenswrapper[4680]: I0126 17:43:40.925797 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerDied","Data":"304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb"} Jan 26 17:43:40 crc kubenswrapper[4680]: I0126 17:43:40.925838 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerStarted","Data":"186fd9c1d3f9cd7aec1f4824437f930b2b5d8ef0bef50ebc20228426674127dd"} Jan 26 17:43:41 crc kubenswrapper[4680]: I0126 17:43:41.936488 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerStarted","Data":"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4"} Jan 26 17:43:43 crc kubenswrapper[4680]: I0126 17:43:43.956960 4680 generic.go:334] "Generic (PLEG): container finished" podID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerID="53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4" exitCode=0 Jan 26 17:43:43 crc kubenswrapper[4680]: I0126 17:43:43.957042 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerDied","Data":"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4"} Jan 26 17:43:44 crc kubenswrapper[4680]: I0126 17:43:44.974330 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerStarted","Data":"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a"} Jan 26 17:43:45 crc kubenswrapper[4680]: I0126 17:43:45.005455 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hv6b" podStartSLOduration=2.493891925 podStartE2EDuration="6.005438216s" podCreationTimestamp="2026-01-26 17:43:39 +0000 UTC" firstStartedPulling="2026-01-26 17:43:40.92748659 +0000 UTC m=+5896.088758859" lastFinishedPulling="2026-01-26 17:43:44.439032841 +0000 UTC m=+5899.600305150" observedRunningTime="2026-01-26 17:43:44.999458256 +0000 UTC m=+5900.160730525" watchObservedRunningTime="2026-01-26 17:43:45.005438216 +0000 UTC m=+5900.166710495" Jan 26 17:43:49 crc kubenswrapper[4680]: I0126 17:43:49.755091 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:49 crc kubenswrapper[4680]: I0126 17:43:49.756636 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:49 crc kubenswrapper[4680]: I0126 17:43:49.811023 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:50 crc kubenswrapper[4680]: I0126 17:43:50.086363 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:50 crc kubenswrapper[4680]: I0126 17:43:50.162886 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hv6b"] Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.047009 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hv6b" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="registry-server" containerID="cri-o://5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a" gracePeriod=2 Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.610348 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.766856 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content\") pod \"92e781f2-bc09-4b6a-bb51-f11d632350b0\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.767026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities\") pod \"92e781f2-bc09-4b6a-bb51-f11d632350b0\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.767174 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5p86\" (UniqueName: \"kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86\") pod \"92e781f2-bc09-4b6a-bb51-f11d632350b0\" (UID: \"92e781f2-bc09-4b6a-bb51-f11d632350b0\") " Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.767761 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities" (OuterVolumeSpecName: "utilities") pod "92e781f2-bc09-4b6a-bb51-f11d632350b0" (UID: "92e781f2-bc09-4b6a-bb51-f11d632350b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.774689 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86" (OuterVolumeSpecName: "kube-api-access-c5p86") pod "92e781f2-bc09-4b6a-bb51-f11d632350b0" (UID: "92e781f2-bc09-4b6a-bb51-f11d632350b0"). InnerVolumeSpecName "kube-api-access-c5p86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.839445 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92e781f2-bc09-4b6a-bb51-f11d632350b0" (UID: "92e781f2-bc09-4b6a-bb51-f11d632350b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.870072 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.870122 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5p86\" (UniqueName: \"kubernetes.io/projected/92e781f2-bc09-4b6a-bb51-f11d632350b0-kube-api-access-c5p86\") on node \"crc\" DevicePath \"\"" Jan 26 17:43:52 crc kubenswrapper[4680]: I0126 17:43:52.870133 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e781f2-bc09-4b6a-bb51-f11d632350b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.063988 4680 generic.go:334] "Generic (PLEG): container finished" podID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerID="5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a" exitCode=0 Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.064160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerDied","Data":"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a"} Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.064536 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hv6b" event={"ID":"92e781f2-bc09-4b6a-bb51-f11d632350b0","Type":"ContainerDied","Data":"186fd9c1d3f9cd7aec1f4824437f930b2b5d8ef0bef50ebc20228426674127dd"} Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.064568 4680 scope.go:117] "RemoveContainer" containerID="5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.064275 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hv6b" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.111691 4680 scope.go:117] "RemoveContainer" containerID="53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.137180 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hv6b"] Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.143354 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hv6b"] Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.152719 4680 scope.go:117] "RemoveContainer" containerID="304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.192580 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" path="/var/lib/kubelet/pods/92e781f2-bc09-4b6a-bb51-f11d632350b0/volumes" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.209124 4680 scope.go:117] "RemoveContainer" containerID="5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a" Jan 26 17:43:53 crc kubenswrapper[4680]: E0126 17:43:53.209670 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a\": container with ID starting with 5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a not found: ID does not exist" containerID="5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.209716 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a"} err="failed to get container status \"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a\": rpc error: code = NotFound desc = could not find container \"5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a\": container with ID starting with 5403f6cfdb4bbe1f526098838eb7d6862a41db8170f247c10e8c06dc248f992a not found: ID does not exist" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.209747 4680 scope.go:117] "RemoveContainer" containerID="53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4" Jan 26 17:43:53 crc kubenswrapper[4680]: E0126 17:43:53.210021 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4\": container with ID starting with 53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4 not found: ID does not exist" containerID="53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.210043 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4"} err="failed to get container status \"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4\": rpc error: code = NotFound desc = could not find container \"53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4\": container with ID starting with 53c8983de6ad870824cbe066f38b2690f469f701c031b53c9ae7f80c641666f4 not found: ID does not exist" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 
17:43:53.210060 4680 scope.go:117] "RemoveContainer" containerID="304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb" Jan 26 17:43:53 crc kubenswrapper[4680]: E0126 17:43:53.210404 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb\": container with ID starting with 304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb not found: ID does not exist" containerID="304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb" Jan 26 17:43:53 crc kubenswrapper[4680]: I0126 17:43:53.210430 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb"} err="failed to get container status \"304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb\": rpc error: code = NotFound desc = could not find container \"304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb\": container with ID starting with 304987fc9007966c87eb1da951788b2710c7b2b9cb04a2ab8af721936e9bc1fb not found: ID does not exist" Jan 26 17:44:16 crc kubenswrapper[4680]: I0126 17:44:16.981486 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:44:16 crc kubenswrapper[4680]: I0126 17:44:16.982187 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.555922 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:29 crc kubenswrapper[4680]: E0126 17:44:29.557513 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="extract-utilities" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.557550 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="extract-utilities" Jan 26 17:44:29 crc kubenswrapper[4680]: E0126 17:44:29.557588 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="registry-server" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.557601 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="registry-server" Jan 26 17:44:29 crc kubenswrapper[4680]: E0126 17:44:29.557663 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="extract-content" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.557677 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="extract-content" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.558050 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e781f2-bc09-4b6a-bb51-f11d632350b0" containerName="registry-server" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 
17:44:29.560392 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.567438 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.707054 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.707126 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.707158 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7r9\" (UniqueName: \"kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.809364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.809441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.809479 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq7r9\" (UniqueName: \"kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.810078 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.810053 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.847213 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq7r9\" (UniqueName: \"kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9\") pod \"redhat-operators-tjhkm\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:29 crc kubenswrapper[4680]: I0126 17:44:29.888056 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:30 crc kubenswrapper[4680]: I0126 17:44:30.649692 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:31 crc kubenswrapper[4680]: I0126 17:44:31.454589 4680 generic.go:334] "Generic (PLEG): container finished" podID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerID="ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89" exitCode=0 Jan 26 17:44:31 crc kubenswrapper[4680]: I0126 17:44:31.454800 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerDied","Data":"ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89"} Jan 26 17:44:31 crc kubenswrapper[4680]: I0126 17:44:31.454889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerStarted","Data":"e427910efc0b6ba537356027ef59ae4126b74c09cf5f5c60f0b68348bce342dc"} Jan 26 17:44:32 crc kubenswrapper[4680]: I0126 17:44:32.464355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerStarted","Data":"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef"} Jan 26 17:44:36 crc kubenswrapper[4680]: I0126 17:44:36.501994 4680 generic.go:334] "Generic (PLEG): container finished" podID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerID="6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef" exitCode=0 Jan 26 17:44:36 crc kubenswrapper[4680]: I0126 17:44:36.502095 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerDied","Data":"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef"} Jan 26 17:44:37 crc kubenswrapper[4680]: I0126 17:44:37.512381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerStarted","Data":"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a"} Jan 26 17:44:37 crc kubenswrapper[4680]: I0126 17:44:37.539574 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tjhkm" podStartSLOduration=3.096955346 podStartE2EDuration="8.539555754s" podCreationTimestamp="2026-01-26 17:44:29 +0000 UTC" firstStartedPulling="2026-01-26 17:44:31.456320737 +0000 UTC m=+5946.617593006" lastFinishedPulling="2026-01-26 17:44:36.898921145 +0000 UTC m=+5952.060193414" observedRunningTime="2026-01-26 17:44:37.535516429 +0000 UTC m=+5952.696788688" watchObservedRunningTime="2026-01-26 17:44:37.539555754 +0000 UTC m=+5952.700828023" Jan 26 17:44:39 crc kubenswrapper[4680]: I0126 17:44:39.889354 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:39 crc kubenswrapper[4680]: I0126 17:44:39.889663 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:40 crc kubenswrapper[4680]: I0126 17:44:40.937667 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tjhkm" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="registry-server" probeResult="failure" output=< Jan 26 17:44:40 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 17:44:40 crc kubenswrapper[4680]: > Jan 26 17:44:46 crc kubenswrapper[4680]: I0126 17:44:46.983498 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:44:46 crc kubenswrapper[4680]: I0126 17:44:46.984009 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:44:49 crc kubenswrapper[4680]: I0126 17:44:49.947055 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:49 crc kubenswrapper[4680]: I0126 17:44:49.999477 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:50 crc kubenswrapper[4680]: I0126 17:44:50.184163 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:51 crc kubenswrapper[4680]: I0126 17:44:51.618939 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tjhkm" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="registry-server" containerID="cri-o://e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a" gracePeriod=2 Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.155773 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.318833 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq7r9\" (UniqueName: \"kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9\") pod \"159416cb-1558-48a2-8c2e-03f8ecc68f41\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.319249 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content\") pod \"159416cb-1558-48a2-8c2e-03f8ecc68f41\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.319288 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities\") pod \"159416cb-1558-48a2-8c2e-03f8ecc68f41\" (UID: \"159416cb-1558-48a2-8c2e-03f8ecc68f41\") " Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.320506 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities" (OuterVolumeSpecName: "utilities") pod "159416cb-1558-48a2-8c2e-03f8ecc68f41" (UID: "159416cb-1558-48a2-8c2e-03f8ecc68f41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.335753 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9" (OuterVolumeSpecName: "kube-api-access-nq7r9") pod "159416cb-1558-48a2-8c2e-03f8ecc68f41" (UID: "159416cb-1558-48a2-8c2e-03f8ecc68f41"). InnerVolumeSpecName "kube-api-access-nq7r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.422777 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq7r9\" (UniqueName: \"kubernetes.io/projected/159416cb-1558-48a2-8c2e-03f8ecc68f41-kube-api-access-nq7r9\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.422817 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.452705 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "159416cb-1558-48a2-8c2e-03f8ecc68f41" (UID: "159416cb-1558-48a2-8c2e-03f8ecc68f41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.525650 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159416cb-1558-48a2-8c2e-03f8ecc68f41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.633053 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerDied","Data":"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a"} Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.633156 4680 scope.go:117] "RemoveContainer" containerID="e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.633200 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjhkm" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.632771 4680 generic.go:334] "Generic (PLEG): container finished" podID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerID="e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a" exitCode=0 Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.633935 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjhkm" event={"ID":"159416cb-1558-48a2-8c2e-03f8ecc68f41","Type":"ContainerDied","Data":"e427910efc0b6ba537356027ef59ae4126b74c09cf5f5c60f0b68348bce342dc"} Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.668518 4680 scope.go:117] "RemoveContainer" containerID="6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.679320 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.691371 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tjhkm"] Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.734294 4680 scope.go:117] "RemoveContainer" containerID="ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.762948 4680 scope.go:117] "RemoveContainer" containerID="e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a" Jan 26 17:44:52 crc kubenswrapper[4680]: E0126 17:44:52.765635 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a\": container with ID starting with e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a not found: ID does not exist" containerID="e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.765694 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a"} err="failed to get container status \"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a\": rpc error: code = NotFound desc = could not find container \"e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a\": container with ID starting with e64d27304e06862c742243154a1f57d3acf163eba9f564b2f70b5ccb7ec48c4a not found: ID does not exist" Jan 26 17:44:52 crc 
kubenswrapper[4680]: I0126 17:44:52.765732 4680 scope.go:117] "RemoveContainer" containerID="6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef" Jan 26 17:44:52 crc kubenswrapper[4680]: E0126 17:44:52.766479 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef\": container with ID starting with 6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef not found: ID does not exist" containerID="6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.766559 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef"} err="failed to get container status \"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef\": rpc error: code = NotFound desc = could not find container \"6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef\": container with ID starting with 6ab3675c122571abaa91786ec9ac3b1fc3c270277927a1ea9987c44ed3dffeef not found: ID does not exist" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.766633 4680 scope.go:117] "RemoveContainer" containerID="ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89" Jan 26 17:44:52 crc kubenswrapper[4680]: E0126 17:44:52.767166 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89\": container with ID starting with ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89 not found: ID does not exist" containerID="ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89" Jan 26 17:44:52 crc kubenswrapper[4680]: I0126 17:44:52.767219 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89"} err="failed to get container status \"ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89\": rpc error: code = NotFound desc = could not find container \"ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89\": container with ID starting with ac5f80dfb1bbe1fdd5d10879ef52bdabf94ea9c2b23f8c2d6a403f762748de89 not found: ID does not exist" Jan 26 17:44:53 crc kubenswrapper[4680]: I0126 17:44:53.181584 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" path="/var/lib/kubelet/pods/159416cb-1558-48a2-8c2e-03f8ecc68f41/volumes" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.223795 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz"] Jan 26 17:45:00 crc kubenswrapper[4680]: E0126 17:45:00.224659 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="extract-utilities" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.224672 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="extract-utilities" Jan 26 17:45:00 crc kubenswrapper[4680]: E0126 17:45:00.224692 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="registry-server" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 
17:45:00.224698 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="registry-server" Jan 26 17:45:00 crc kubenswrapper[4680]: E0126 17:45:00.224709 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="extract-content" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.224715 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="extract-content" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.224904 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="159416cb-1558-48a2-8c2e-03f8ecc68f41" containerName="registry-server" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.225632 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.235769 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz"] Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.300442 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.300444 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.409314 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.409387 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzd4\" (UniqueName: \"kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.409726 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.511336 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.512489 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzd4\" (UniqueName: 
\"kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.512718 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.513673 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.518493 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.530662 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzd4\" (UniqueName: \"kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4\") pod \"collect-profiles-29490825-lcqjz\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:00 crc kubenswrapper[4680]: I0126 17:45:00.553439 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:01 crc kubenswrapper[4680]: I0126 17:45:01.067305 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz"] Jan 26 17:45:01 crc kubenswrapper[4680]: I0126 17:45:01.710689 4680 generic.go:334] "Generic (PLEG): container finished" podID="7c59e01a-f156-45c4-bdfe-1e1abaabaf84" containerID="c5c9254e0a030802c08a6cae10c595b612102cc628e7c41da2fd05835beb9ddc" exitCode=0 Jan 26 17:45:01 crc kubenswrapper[4680]: I0126 17:45:01.710935 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" event={"ID":"7c59e01a-f156-45c4-bdfe-1e1abaabaf84","Type":"ContainerDied","Data":"c5c9254e0a030802c08a6cae10c595b612102cc628e7c41da2fd05835beb9ddc"} Jan 26 17:45:01 crc kubenswrapper[4680]: I0126 17:45:01.710958 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" event={"ID":"7c59e01a-f156-45c4-bdfe-1e1abaabaf84","Type":"ContainerStarted","Data":"9ffd249d44469a921998aed5778b7ec83dd926a85061fb86f86732656dbaf59c"} Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.115721 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.267156 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume\") pod \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.267371 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnzd4\" (UniqueName: \"kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4\") pod \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.267488 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume\") pod \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\" (UID: \"7c59e01a-f156-45c4-bdfe-1e1abaabaf84\") " Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.268633 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c59e01a-f156-45c4-bdfe-1e1abaabaf84" (UID: "7c59e01a-f156-45c4-bdfe-1e1abaabaf84"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.274351 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4" (OuterVolumeSpecName: "kube-api-access-fnzd4") pod "7c59e01a-f156-45c4-bdfe-1e1abaabaf84" (UID: "7c59e01a-f156-45c4-bdfe-1e1abaabaf84"). InnerVolumeSpecName "kube-api-access-fnzd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.275228 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c59e01a-f156-45c4-bdfe-1e1abaabaf84" (UID: "7c59e01a-f156-45c4-bdfe-1e1abaabaf84"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.370555 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.370584 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnzd4\" (UniqueName: \"kubernetes.io/projected/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-kube-api-access-fnzd4\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.370594 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c59e01a-f156-45c4-bdfe-1e1abaabaf84-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.738009 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" event={"ID":"7c59e01a-f156-45c4-bdfe-1e1abaabaf84","Type":"ContainerDied","Data":"9ffd249d44469a921998aed5778b7ec83dd926a85061fb86f86732656dbaf59c"} Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.738288 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz" Jan 26 17:45:03 crc kubenswrapper[4680]: I0126 17:45:03.738054 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffd249d44469a921998aed5778b7ec83dd926a85061fb86f86732656dbaf59c" Jan 26 17:45:04 crc kubenswrapper[4680]: I0126 17:45:04.212964 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg"] Jan 26 17:45:04 crc kubenswrapper[4680]: I0126 17:45:04.222455 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-z25rg"] Jan 26 17:45:05 crc kubenswrapper[4680]: I0126 17:45:05.183698 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3109f250-b24f-41e7-b633-21f6c63bdfae" path="/var/lib/kubelet/pods/3109f250-b24f-41e7-b633-21f6c63bdfae/volumes" Jan 26 17:45:16 crc kubenswrapper[4680]: I0126 17:45:16.980744 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:45:16 crc kubenswrapper[4680]: I0126 17:45:16.981184 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:45:16 crc kubenswrapper[4680]: I0126 17:45:16.981231 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:45:16 crc kubenswrapper[4680]: I0126 17:45:16.981975 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5"} 
pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:45:16 crc kubenswrapper[4680]: I0126 17:45:16.982022 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" gracePeriod=600 Jan 26 17:45:17 crc kubenswrapper[4680]: E0126 17:45:17.116660 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:45:17 crc kubenswrapper[4680]: I0126 17:45:17.872873 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" exitCode=0 Jan 26 17:45:17 crc kubenswrapper[4680]: I0126 17:45:17.872994 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5"} Jan 26 17:45:17 crc kubenswrapper[4680]: I0126 17:45:17.873363 4680 scope.go:117] "RemoveContainer" containerID="f7febf756be66c069882f6cd8ee798cda0d501f22e79bbcea7b21e06958ba2e0" Jan 26 17:45:17 crc kubenswrapper[4680]: I0126 17:45:17.874378 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:45:17 crc kubenswrapper[4680]: E0126 17:45:17.874833 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:45:29 crc kubenswrapper[4680]: I0126 17:45:29.170422 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:45:29 crc kubenswrapper[4680]: E0126 17:45:29.171395 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:45:42 crc kubenswrapper[4680]: I0126 17:45:42.169565 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:45:42 crc kubenswrapper[4680]: E0126 17:45:42.172097 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:45:47 crc kubenswrapper[4680]: I0126 17:45:47.516261 4680 scope.go:117] "RemoveContainer" containerID="d78818e1459ff8593e23585c7bc747d0aa6db47f037d232f71687cdd6c9c8270" Jan 26 17:45:57 crc kubenswrapper[4680]: I0126 17:45:57.169823 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:45:57 crc kubenswrapper[4680]: E0126 17:45:57.170551 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:46:12 crc kubenswrapper[4680]: I0126 17:46:12.169781 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:46:12 crc kubenswrapper[4680]: E0126 17:46:12.170652 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:46:24 crc kubenswrapper[4680]: I0126 17:46:24.169705 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:46:24 crc kubenswrapper[4680]: E0126 17:46:24.170425 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:46:31 crc kubenswrapper[4680]: I0126 17:46:31.512539 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-w9dh6" podUID="c5ab6c1a-749f-4701-8de4-f3f1d53aaf0c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:31 crc kubenswrapper[4680]: I0126 17:46:31.520015 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:46:31 crc kubenswrapper[4680]: I0126 17:46:31.520030 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-v9qd8" podUID="ae5969bc-48f4-499f-9ca5-6858279a47d6" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:46:38 crc kubenswrapper[4680]: I0126 17:46:38.169854 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:46:38 crc kubenswrapper[4680]: E0126 17:46:38.170658 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:46:51 crc kubenswrapper[4680]: I0126 17:46:51.169644 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:46:51 crc kubenswrapper[4680]: E0126 17:46:51.170583 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:47:04 crc kubenswrapper[4680]: I0126 17:47:04.170261 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:47:04 crc kubenswrapper[4680]: E0126 17:47:04.171530 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:47:16 crc kubenswrapper[4680]: I0126 17:47:16.170057 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:47:16 crc kubenswrapper[4680]: E0126 17:47:16.170764 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:47:30 crc kubenswrapper[4680]: I0126 17:47:30.169195 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:47:30 crc kubenswrapper[4680]: E0126 17:47:30.170105 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:47:43 crc kubenswrapper[4680]: I0126 17:47:43.170569 4680 scope.go:117] "RemoveContainer" 
containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:47:43 crc kubenswrapper[4680]: E0126 17:47:43.171343 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:47:56 crc kubenswrapper[4680]: I0126 17:47:56.169786 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:47:56 crc kubenswrapper[4680]: E0126 17:47:56.170463 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:48:07 crc kubenswrapper[4680]: I0126 17:48:07.169832 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:48:07 crc kubenswrapper[4680]: E0126 17:48:07.170663 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:48:21 crc kubenswrapper[4680]: I0126 17:48:21.169697 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:48:21 crc kubenswrapper[4680]: E0126 17:48:21.170482 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:48:34 crc kubenswrapper[4680]: I0126 17:48:34.169820 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:48:34 crc kubenswrapper[4680]: E0126 17:48:34.170850 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:48:47 crc kubenswrapper[4680]: I0126 17:48:47.170754 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:48:47 crc kubenswrapper[4680]: E0126 17:48:47.171635 4680 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:49:01 crc kubenswrapper[4680]: I0126 17:49:01.174792 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:49:01 crc kubenswrapper[4680]: E0126 17:49:01.176360 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:49:13 crc kubenswrapper[4680]: I0126 17:49:13.171480 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:49:13 crc kubenswrapper[4680]: E0126 17:49:13.173351 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:49:27 crc kubenswrapper[4680]: I0126 17:49:27.171184 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:49:27 crc kubenswrapper[4680]: E0126 17:49:27.172430 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:49:42 crc kubenswrapper[4680]: I0126 17:49:42.169216 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:49:42 crc kubenswrapper[4680]: E0126 17:49:42.171019 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:49:56 crc kubenswrapper[4680]: I0126 17:49:56.169812 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:49:56 crc kubenswrapper[4680]: E0126 17:49:56.170633 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:50:08 crc kubenswrapper[4680]: I0126 17:50:08.169284 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:50:08 crc kubenswrapper[4680]: E0126 17:50:08.169970 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:50:23 crc kubenswrapper[4680]: I0126 17:50:23.170641 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:50:23 crc kubenswrapper[4680]: I0126 17:50:23.672466 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9"} Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.312267 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:27 crc kubenswrapper[4680]: E0126 17:51:27.313147 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c59e01a-f156-45c4-bdfe-1e1abaabaf84" containerName="collect-profiles" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.313160 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c59e01a-f156-45c4-bdfe-1e1abaabaf84" containerName="collect-profiles" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.313379 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c59e01a-f156-45c4-bdfe-1e1abaabaf84" containerName="collect-profiles" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.314695 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.334410 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.468452 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.468992 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4g8\" (UniqueName: \"kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.469165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.571031 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.571105 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4g8\" (UniqueName: \"kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.571200 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.571619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.571621 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.596315 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wr4g8\" (UniqueName: \"kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8\") pod \"redhat-marketplace-7bcq4\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:27 crc kubenswrapper[4680]: I0126 17:51:27.634778 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:28 crc kubenswrapper[4680]: I0126 17:51:28.191190 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:28 crc kubenswrapper[4680]: I0126 17:51:28.219918 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerStarted","Data":"86e0c945b2f009798c199b0cf1ae9dec26364fabaa3d25cea537a72f16b5d4b8"} Jan 26 17:51:29 crc kubenswrapper[4680]: I0126 17:51:29.232202 4680 generic.go:334] "Generic (PLEG): container finished" podID="95b64322-e296-44ab-89b2-422618407e0b" containerID="5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e" exitCode=0 Jan 26 17:51:29 crc kubenswrapper[4680]: I0126 17:51:29.232554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerDied","Data":"5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e"} Jan 26 17:51:29 crc kubenswrapper[4680]: I0126 17:51:29.234823 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:51:31 crc kubenswrapper[4680]: I0126 17:51:31.259640 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerStarted","Data":"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69"} Jan 26 17:51:32 crc kubenswrapper[4680]: I0126 17:51:32.271606 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerDied","Data":"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69"} Jan 26 17:51:32 crc kubenswrapper[4680]: I0126 17:51:32.271532 4680 generic.go:334] "Generic (PLEG): container finished" podID="95b64322-e296-44ab-89b2-422618407e0b" containerID="16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69" exitCode=0 Jan 26 17:51:33 crc kubenswrapper[4680]: I0126 17:51:33.282612 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerStarted","Data":"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece"} Jan 26 17:51:33 crc kubenswrapper[4680]: I0126 17:51:33.321122 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7bcq4" podStartSLOduration=2.778430029 podStartE2EDuration="6.321005201s" podCreationTimestamp="2026-01-26 17:51:27 +0000 UTC" firstStartedPulling="2026-01-26 17:51:29.23440445 +0000 UTC m=+6364.395676719" lastFinishedPulling="2026-01-26 17:51:32.776979622 +0000 UTC m=+6367.938251891" observedRunningTime="2026-01-26 17:51:33.318444689 +0000 UTC m=+6368.479716968" watchObservedRunningTime="2026-01-26 17:51:33.321005201 +0000 UTC 
m=+6368.482277480" Jan 26 17:51:37 crc kubenswrapper[4680]: I0126 17:51:37.635555 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:37 crc kubenswrapper[4680]: I0126 17:51:37.636121 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:37 crc kubenswrapper[4680]: I0126 17:51:37.684779 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:38 crc kubenswrapper[4680]: I0126 17:51:38.398268 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:38 crc kubenswrapper[4680]: I0126 17:51:38.449570 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:40 crc kubenswrapper[4680]: I0126 17:51:40.360500 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7bcq4" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="registry-server" containerID="cri-o://a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece" gracePeriod=2 Jan 26 17:51:40 crc kubenswrapper[4680]: I0126 17:51:40.867525 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.026519 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr4g8\" (UniqueName: \"kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8\") pod \"95b64322-e296-44ab-89b2-422618407e0b\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.027032 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content\") pod \"95b64322-e296-44ab-89b2-422618407e0b\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.027138 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities\") pod \"95b64322-e296-44ab-89b2-422618407e0b\" (UID: \"95b64322-e296-44ab-89b2-422618407e0b\") " Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.028081 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities" (OuterVolumeSpecName: "utilities") pod "95b64322-e296-44ab-89b2-422618407e0b" (UID: "95b64322-e296-44ab-89b2-422618407e0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.033513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8" (OuterVolumeSpecName: "kube-api-access-wr4g8") pod "95b64322-e296-44ab-89b2-422618407e0b" (UID: "95b64322-e296-44ab-89b2-422618407e0b"). InnerVolumeSpecName "kube-api-access-wr4g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.054356 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95b64322-e296-44ab-89b2-422618407e0b" (UID: "95b64322-e296-44ab-89b2-422618407e0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.130929 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr4g8\" (UniqueName: \"kubernetes.io/projected/95b64322-e296-44ab-89b2-422618407e0b-kube-api-access-wr4g8\") on node \"crc\" DevicePath \"\"" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.130970 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.130986 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95b64322-e296-44ab-89b2-422618407e0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.371967 4680 generic.go:334] "Generic (PLEG): container finished" podID="95b64322-e296-44ab-89b2-422618407e0b" containerID="a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece" exitCode=0 Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.372016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerDied","Data":"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece"} Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.372044 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7bcq4" event={"ID":"95b64322-e296-44ab-89b2-422618407e0b","Type":"ContainerDied","Data":"86e0c945b2f009798c199b0cf1ae9dec26364fabaa3d25cea537a72f16b5d4b8"} Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.372119 4680 scope.go:117] "RemoveContainer" containerID="a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.372181 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7bcq4" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.396830 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.404521 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7bcq4"] Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.410824 4680 scope.go:117] "RemoveContainer" containerID="16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.436301 4680 scope.go:117] "RemoveContainer" containerID="5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.477956 4680 scope.go:117] "RemoveContainer" containerID="a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece" Jan 26 17:51:41 crc kubenswrapper[4680]: E0126 17:51:41.478605 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece\": container with ID starting with a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece not found: ID does not exist" containerID="a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.478676 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece"} err="failed to get container status \"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece\": rpc error: code = NotFound desc = could not find container \"a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece\": container with ID starting with a0fbf3b3b24bae26c65a173b41e95de4d46383fb76d2ca5fc4e86ed870491ece not found: ID does not exist" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.478729 4680 scope.go:117] "RemoveContainer" containerID="16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69" Jan 26 17:51:41 crc kubenswrapper[4680]: E0126 17:51:41.479193 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69\": container with ID starting with 16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69 not found: ID does not exist" containerID="16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.479243 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69"} err="failed to get container status \"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69\": rpc error: code = NotFound desc = could not find container \"16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69\": container with ID starting with 16f6c7a6f7d3fe003f405b27242302c901ea7be6bb7d564b021004edbab6cf69 not found: ID does not exist" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.479270 4680 scope.go:117] "RemoveContainer" containerID="5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e" Jan 26 17:51:41 crc kubenswrapper[4680]: E0126 17:51:41.479733 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e\": container with ID starting with 5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e not found: ID does not exist" containerID="5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e" Jan 26 17:51:41 crc kubenswrapper[4680]: I0126 17:51:41.479768 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e"} err="failed to get container status \"5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e\": rpc error: code = NotFound desc = could not find container \"5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e\": container with ID starting with 5b0d1cff2065d069e26653c7808a34632523dc3ea3c506cfcd7ddacf9f81cb2e not found: ID does not exist" Jan 26 17:51:43 crc kubenswrapper[4680]: I0126 17:51:43.181759 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b64322-e296-44ab-89b2-422618407e0b" path="/var/lib/kubelet/pods/95b64322-e296-44ab-89b2-422618407e0b/volumes" Jan 26 17:52:46 crc kubenswrapper[4680]: I0126 17:52:46.981550 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:52:46 crc kubenswrapper[4680]: I0126 17:52:46.982111 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:53:16 crc kubenswrapper[4680]: I0126 17:53:16.981408 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:53:16 crc kubenswrapper[4680]: I0126 17:53:16.981911 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:53:46 crc kubenswrapper[4680]: I0126 17:53:46.980577 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:53:46 crc kubenswrapper[4680]: I0126 17:53:46.981184 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:53:46 crc kubenswrapper[4680]: I0126 17:53:46.981240 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:53:46 crc kubenswrapper[4680]: I0126 17:53:46.982056 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:53:46 crc kubenswrapper[4680]: I0126 17:53:46.982127 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9" gracePeriod=600 Jan 26 17:53:47 crc kubenswrapper[4680]: I0126 17:53:47.510769 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9" exitCode=0 Jan 26 17:53:47 crc kubenswrapper[4680]: I0126 17:53:47.510799 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9"} Jan 26 17:53:47 crc kubenswrapper[4680]: I0126 17:53:47.511162 4680 scope.go:117] "RemoveContainer" containerID="c87af4175a6ea45361e7ef78f3502c96faf810a95b761b638bc461171234cda5" Jan 26 17:53:48 crc kubenswrapper[4680]: I0126 17:53:48.522200 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de"} Jan 26 17:56:16 crc kubenswrapper[4680]: I0126 17:56:16.980885 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:56:16 crc kubenswrapper[4680]: I0126 17:56:16.981553 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:56:46 crc kubenswrapper[4680]: I0126 17:56:46.980995 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:56:46 crc kubenswrapper[4680]: I0126 17:56:46.981593 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:57:16 crc 
kubenswrapper[4680]: I0126 17:57:16.981450 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:57:16 crc kubenswrapper[4680]: I0126 17:57:16.983182 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:57:16 crc kubenswrapper[4680]: I0126 17:57:16.983271 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 17:57:16 crc kubenswrapper[4680]: I0126 17:57:16.984094 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:57:16 crc kubenswrapper[4680]: I0126 17:57:16.984183 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" gracePeriod=600 Jan 26 17:57:17 crc kubenswrapper[4680]: E0126 17:57:17.123967 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:57:17 crc kubenswrapper[4680]: I0126 17:57:17.298168 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" exitCode=0 Jan 26 17:57:17 crc kubenswrapper[4680]: I0126 17:57:17.298212 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de"} Jan 26 17:57:17 crc kubenswrapper[4680]: I0126 17:57:17.298248 4680 scope.go:117] "RemoveContainer" containerID="5c823779dc7df06424d3fd771945ddfbcb5f655f27889dc11e35e972c94588e9" Jan 26 17:57:17 crc kubenswrapper[4680]: I0126 17:57:17.299276 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:57:17 crc kubenswrapper[4680]: E0126 17:57:17.299618 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:57:28 crc kubenswrapper[4680]: I0126 17:57:28.170134 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:57:28 crc kubenswrapper[4680]: E0126 17:57:28.170943 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:57:40 crc kubenswrapper[4680]: I0126 17:57:40.169599 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:57:40 crc kubenswrapper[4680]: E0126 17:57:40.170480 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:57:52 crc kubenswrapper[4680]: I0126 17:57:52.196201 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:57:52 crc kubenswrapper[4680]: E0126 17:57:52.197200 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:58:06 crc kubenswrapper[4680]: I0126 17:58:06.169865 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:58:06 crc kubenswrapper[4680]: E0126 17:58:06.171028 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:58:21 crc kubenswrapper[4680]: I0126 17:58:21.170311 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:58:21 crc kubenswrapper[4680]: E0126 17:58:21.171839 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:58:34 crc kubenswrapper[4680]: I0126 17:58:34.169769 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:58:34 crc kubenswrapper[4680]: E0126 17:58:34.170653 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:58:45 crc kubenswrapper[4680]: I0126 17:58:45.169389 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:58:45 crc kubenswrapper[4680]: E0126 17:58:45.170219 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:58:59 crc kubenswrapper[4680]: I0126 17:58:59.173470 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:58:59 crc kubenswrapper[4680]: E0126 17:58:59.174877 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:59:12 crc kubenswrapper[4680]: I0126 17:59:12.170100 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:59:12 crc kubenswrapper[4680]: E0126 17:59:12.170879 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:59:26 crc kubenswrapper[4680]: I0126 17:59:26.170472 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:59:26 crc kubenswrapper[4680]: E0126 17:59:26.171452 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:59:37 crc kubenswrapper[4680]: I0126 17:59:37.170161 4680 scope.go:117] "RemoveContainer" 
containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:59:37 crc kubenswrapper[4680]: E0126 17:59:37.170871 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 17:59:50 crc kubenswrapper[4680]: I0126 17:59:50.171314 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 17:59:50 crc kubenswrapper[4680]: E0126 17:59:50.172461 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.154607 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf"] Jan 26 18:00:00 crc kubenswrapper[4680]: E0126 18:00:00.155602 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="registry-server" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.155616 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="registry-server" Jan 26 18:00:00 crc kubenswrapper[4680]: E0126 18:00:00.155658 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="extract-utilities" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.155664 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="extract-utilities" Jan 26 18:00:00 crc kubenswrapper[4680]: E0126 18:00:00.155677 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="extract-content" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.155683 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="extract-content" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.155877 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b64322-e296-44ab-89b2-422618407e0b" containerName="registry-server" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.156553 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.158446 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnsc7\" (UniqueName: \"kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.158863 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.158950 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.159501 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.159707 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.172120 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf"] Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.260978 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.261052 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.261108 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnsc7\" (UniqueName: \"kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.263875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume\") pod 
\"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.275344 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.280005 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnsc7\" (UniqueName: \"kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7\") pod \"collect-profiles-29490840-fsgtf\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.482345 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:00 crc kubenswrapper[4680]: I0126 18:00:00.958906 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf"] Jan 26 18:00:01 crc kubenswrapper[4680]: I0126 18:00:01.748825 4680 generic.go:334] "Generic (PLEG): container finished" podID="3afc8d03-6630-4401-bf0e-0a2346ef96d9" containerID="bf72286a8ac3c2522efacd456a5432912a102f4b5cbf3396ca48680357088423" exitCode=0 Jan 26 18:00:01 crc kubenswrapper[4680]: I0126 18:00:01.748881 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" event={"ID":"3afc8d03-6630-4401-bf0e-0a2346ef96d9","Type":"ContainerDied","Data":"bf72286a8ac3c2522efacd456a5432912a102f4b5cbf3396ca48680357088423"} Jan 26 18:00:01 crc kubenswrapper[4680]: I0126 18:00:01.749128 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" event={"ID":"3afc8d03-6630-4401-bf0e-0a2346ef96d9","Type":"ContainerStarted","Data":"ec503a3beaac848b0723e9a1972ffb681671d5c6485ed60fb1b29e3af7414539"} Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.142961 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.317791 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnsc7\" (UniqueName: \"kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7\") pod \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.317856 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume\") pod \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.317894 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume\") pod \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\" (UID: \"3afc8d03-6630-4401-bf0e-0a2346ef96d9\") " Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.318633 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "3afc8d03-6630-4401-bf0e-0a2346ef96d9" (UID: "3afc8d03-6630-4401-bf0e-0a2346ef96d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.319432 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afc8d03-6630-4401-bf0e-0a2346ef96d9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.325564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3afc8d03-6630-4401-bf0e-0a2346ef96d9" (UID: "3afc8d03-6630-4401-bf0e-0a2346ef96d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.329314 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7" (OuterVolumeSpecName: "kube-api-access-nnsc7") pod "3afc8d03-6630-4401-bf0e-0a2346ef96d9" (UID: "3afc8d03-6630-4401-bf0e-0a2346ef96d9"). InnerVolumeSpecName "kube-api-access-nnsc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.421269 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnsc7\" (UniqueName: \"kubernetes.io/projected/3afc8d03-6630-4401-bf0e-0a2346ef96d9-kube-api-access-nnsc7\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.421308 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afc8d03-6630-4401-bf0e-0a2346ef96d9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.768742 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" event={"ID":"3afc8d03-6630-4401-bf0e-0a2346ef96d9","Type":"ContainerDied","Data":"ec503a3beaac848b0723e9a1972ffb681671d5c6485ed60fb1b29e3af7414539"} Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.768783 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec503a3beaac848b0723e9a1972ffb681671d5c6485ed60fb1b29e3af7414539" Jan 26 18:00:03 crc kubenswrapper[4680]: I0126 18:00:03.768840 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-fsgtf" Jan 26 18:00:04 crc kubenswrapper[4680]: I0126 18:00:04.170292 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:00:04 crc kubenswrapper[4680]: E0126 18:00:04.170886 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:00:04 crc kubenswrapper[4680]: I0126 18:00:04.216297 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn"] Jan 26 18:00:04 crc kubenswrapper[4680]: I0126 18:00:04.223989 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-dnjqn"] Jan 26 18:00:05 crc kubenswrapper[4680]: I0126 18:00:05.181521 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487cee21-a41e-45a1-a79e-8335c55fcdf1" path="/var/lib/kubelet/pods/487cee21-a41e-45a1-a79e-8335c55fcdf1/volumes" Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.374216 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"] Jan 26 18:00:09 crc kubenswrapper[4680]: E0126 18:00:09.376506 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afc8d03-6630-4401-bf0e-0a2346ef96d9" containerName="collect-profiles" Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.376588 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afc8d03-6630-4401-bf0e-0a2346ef96d9" containerName="collect-profiles" Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.376846 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afc8d03-6630-4401-bf0e-0a2346ef96d9" containerName="collect-profiles" Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.378348 4680 util.go:30] "No sandbox 
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.378348 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.394547 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"]
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.435562 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.435934 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllwr\" (UniqueName: \"kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.436000 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.538451 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllwr\" (UniqueName: \"kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.538589 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.538625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.539257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.539722 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.574150 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllwr\" (UniqueName: \"kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr\") pod \"redhat-operators-pgwb6\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") " pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.712494 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.986093 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrqgd"]
Jan 26 18:00:09 crc kubenswrapper[4680]: I0126 18:00:09.996836 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.004238 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqgd"]
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.064350 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.064416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.064571 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz7c\" (UniqueName: \"kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.089874 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"]
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.193731 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tz7c\" (UniqueName: \"kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.194167 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.194282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.194805 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.196407 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.215122 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tz7c\" (UniqueName: \"kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c\") pod \"community-operators-zrqgd\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.322300 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqgd"
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.840614 4680 generic.go:334] "Generic (PLEG): container finished" podID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerID="2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff" exitCode=0
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.840677 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerDied","Data":"2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff"}
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.840908 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerStarted","Data":"238bbe34b440855da28977ee345eff491e800014c62eb1795c36678247b6e901"}
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.843031 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 18:00:10 crc kubenswrapper[4680]: I0126 18:00:10.935413 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqgd"]
Jan 26 18:00:10 crc kubenswrapper[4680]: W0126 18:00:10.947408 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda40cd913_49c0_409f_8ea4_80b1abcac149.slice/crio-b98876c14a4fbb6e12ae53bf52da1a0006e4b9decd64c585fb48aaff6cf4e390 WatchSource:0}: Error finding container b98876c14a4fbb6e12ae53bf52da1a0006e4b9decd64c585fb48aaff6cf4e390: Status 404 returned error can't find the container with id b98876c14a4fbb6e12ae53bf52da1a0006e4b9decd64c585fb48aaff6cf4e390
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.781249 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-79flx"]
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.783467 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.808250 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79flx"]
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.857231 4680 generic.go:334] "Generic (PLEG): container finished" podID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerID="ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e" exitCode=0
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.857314 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerDied","Data":"ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e"}
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.857346 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerStarted","Data":"b98876c14a4fbb6e12ae53bf52da1a0006e4b9decd64c585fb48aaff6cf4e390"}
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.861498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerStarted","Data":"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"}
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.934676 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62r4h\" (UniqueName: \"kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.934744 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:11 crc kubenswrapper[4680]: I0126 18:00:11.934819 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.035856 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.035989 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62r4h\" (UniqueName: \"kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.036026 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.036359 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.036760 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.192527 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62r4h\" (UniqueName: \"kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h\") pod \"certified-operators-79flx\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") " pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:12 crc kubenswrapper[4680]: I0126 18:00:12.426537 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:13 crc kubenswrapper[4680]: I0126 18:00:13.195969 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79flx"]
Jan 26 18:00:13 crc kubenswrapper[4680]: I0126 18:00:13.883265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerStarted","Data":"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16"}
Jan 26 18:00:13 crc kubenswrapper[4680]: I0126 18:00:13.884841 4680 generic.go:334] "Generic (PLEG): container finished" podID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerID="081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27" exitCode=0
Jan 26 18:00:13 crc kubenswrapper[4680]: I0126 18:00:13.884874 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerDied","Data":"081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27"}
Jan 26 18:00:13 crc kubenswrapper[4680]: I0126 18:00:13.884896 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerStarted","Data":"cc24aed6fca7668f2ee78380741f06d2e4195add3a7b445a56d5e350021901dc"}
Jan 26 18:00:15 crc kubenswrapper[4680]: I0126 18:00:15.902815 4680 generic.go:334] "Generic (PLEG): container finished" podID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerID="9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16" exitCode=0
event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerDied","Data":"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16"} Jan 26 18:00:16 crc kubenswrapper[4680]: I0126 18:00:16.914614 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerStarted","Data":"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"} Jan 26 18:00:16 crc kubenswrapper[4680]: I0126 18:00:16.919413 4680 generic.go:334] "Generic (PLEG): container finished" podID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerID="70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205" exitCode=0 Jan 26 18:00:16 crc kubenswrapper[4680]: I0126 18:00:16.919477 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerDied","Data":"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"} Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.930128 4680 generic.go:334] "Generic (PLEG): container finished" podID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerID="86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f" exitCode=0 Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.930259 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerDied","Data":"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"} Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.935055 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerStarted","Data":"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"} Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.938887 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerStarted","Data":"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc"} Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.970239 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrqgd" podStartSLOduration=4.2377839680000005 podStartE2EDuration="8.970221751s" podCreationTimestamp="2026-01-26 18:00:09 +0000 UTC" firstStartedPulling="2026-01-26 18:00:11.858860529 +0000 UTC m=+6887.020132798" lastFinishedPulling="2026-01-26 18:00:16.591298312 +0000 UTC m=+6891.752570581" observedRunningTime="2026-01-26 18:00:17.965751844 +0000 UTC m=+6893.127024113" watchObservedRunningTime="2026-01-26 18:00:17.970221751 +0000 UTC m=+6893.131494020" Jan 26 18:00:17 crc kubenswrapper[4680]: I0126 18:00:17.992066 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pgwb6" podStartSLOduration=2.144836737 podStartE2EDuration="8.992024659s" podCreationTimestamp="2026-01-26 18:00:09 +0000 UTC" firstStartedPulling="2026-01-26 18:00:10.842807061 +0000 UTC m=+6886.004079330" lastFinishedPulling="2026-01-26 18:00:17.689994983 +0000 UTC m=+6892.851267252" observedRunningTime="2026-01-26 18:00:17.985724951 +0000 UTC m=+6893.146997220" watchObservedRunningTime="2026-01-26 18:00:17.992024659 +0000 UTC m=+6893.153296928" Jan 26 
18:00:18 crc kubenswrapper[4680]: I0126 18:00:18.169450 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:00:18 crc kubenswrapper[4680]: E0126 18:00:18.169723 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:00:18 crc kubenswrapper[4680]: I0126 18:00:18.953092 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerStarted","Data":"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"} Jan 26 18:00:18 crc kubenswrapper[4680]: I0126 18:00:18.972434 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-79flx" podStartSLOduration=4.555012811 podStartE2EDuration="7.972414286s" podCreationTimestamp="2026-01-26 18:00:11 +0000 UTC" firstStartedPulling="2026-01-26 18:00:14.99299901 +0000 UTC m=+6890.154271279" lastFinishedPulling="2026-01-26 18:00:18.410400485 +0000 UTC m=+6893.571672754" observedRunningTime="2026-01-26 18:00:18.969825742 +0000 UTC m=+6894.131098011" watchObservedRunningTime="2026-01-26 18:00:18.972414286 +0000 UTC m=+6894.133686555" Jan 26 18:00:19 crc kubenswrapper[4680]: I0126 18:00:19.712929 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pgwb6" Jan 26 18:00:19 crc kubenswrapper[4680]: I0126 18:00:19.713220 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pgwb6" Jan 26 18:00:20 crc kubenswrapper[4680]: I0126 18:00:20.323214 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:20 crc kubenswrapper[4680]: I0126 18:00:20.323308 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:20 crc kubenswrapper[4680]: I0126 18:00:20.770183 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pgwb6" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" probeResult="failure" output=< Jan 26 18:00:20 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:00:20 crc kubenswrapper[4680]: > Jan 26 18:00:21 crc kubenswrapper[4680]: I0126 18:00:21.376025 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zrqgd" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="registry-server" probeResult="failure" output=< Jan 26 18:00:21 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:00:21 crc kubenswrapper[4680]: > Jan 26 18:00:22 crc kubenswrapper[4680]: I0126 18:00:22.431799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-79flx" Jan 26 18:00:22 crc kubenswrapper[4680]: I0126 18:00:22.436341 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-79flx" Jan 26 18:00:23 crc kubenswrapper[4680]: I0126 18:00:23.486271 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-79flx" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="registry-server" probeResult="failure" output=< Jan 26 18:00:23 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:00:23 crc kubenswrapper[4680]: > Jan 26 18:00:30 crc kubenswrapper[4680]: I0126 18:00:30.373842 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:30 crc kubenswrapper[4680]: I0126 18:00:30.430063 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:30 crc kubenswrapper[4680]: I0126 18:00:30.608042 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrqgd"] Jan 26 18:00:30 crc kubenswrapper[4680]: I0126 18:00:30.759148 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pgwb6" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" probeResult="failure" output=< Jan 26 18:00:30 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:00:30 crc kubenswrapper[4680]: > Jan 26 18:00:31 crc kubenswrapper[4680]: I0126 18:00:31.170138 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:00:31 crc kubenswrapper[4680]: E0126 18:00:31.170606 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:00:32 crc kubenswrapper[4680]: I0126 18:00:32.072731 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zrqgd" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="registry-server" containerID="cri-o://6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc" gracePeriod=2 Jan 26 18:00:32 crc kubenswrapper[4680]: I0126 18:00:32.483480 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-79flx" Jan 26 18:00:32 crc kubenswrapper[4680]: I0126 18:00:32.551042 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-79flx" Jan 26 18:00:32 crc kubenswrapper[4680]: I0126 18:00:32.905699 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.013651 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-79flx"] Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.045323 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content\") pod \"a40cd913-49c0-409f-8ea4-80b1abcac149\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.045369 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tz7c\" (UniqueName: \"kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c\") pod \"a40cd913-49c0-409f-8ea4-80b1abcac149\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.045446 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities\") pod \"a40cd913-49c0-409f-8ea4-80b1abcac149\" (UID: \"a40cd913-49c0-409f-8ea4-80b1abcac149\") " Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.046037 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities" (OuterVolumeSpecName: "utilities") pod "a40cd913-49c0-409f-8ea4-80b1abcac149" (UID: "a40cd913-49c0-409f-8ea4-80b1abcac149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.055588 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c" (OuterVolumeSpecName: "kube-api-access-4tz7c") pod "a40cd913-49c0-409f-8ea4-80b1abcac149" (UID: "a40cd913-49c0-409f-8ea4-80b1abcac149"). InnerVolumeSpecName "kube-api-access-4tz7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.093060 4680 generic.go:334] "Generic (PLEG): container finished" podID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerID="6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc" exitCode=0 Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.093919 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrqgd" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.094406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerDied","Data":"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc"} Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.094447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqgd" event={"ID":"a40cd913-49c0-409f-8ea4-80b1abcac149","Type":"ContainerDied","Data":"b98876c14a4fbb6e12ae53bf52da1a0006e4b9decd64c585fb48aaff6cf4e390"} Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.094472 4680 scope.go:117] "RemoveContainer" containerID="6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.105490 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a40cd913-49c0-409f-8ea4-80b1abcac149" (UID: "a40cd913-49c0-409f-8ea4-80b1abcac149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.147875 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.147910 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tz7c\" (UniqueName: \"kubernetes.io/projected/a40cd913-49c0-409f-8ea4-80b1abcac149-kube-api-access-4tz7c\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.147923 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40cd913-49c0-409f-8ea4-80b1abcac149-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.154302 4680 scope.go:117] "RemoveContainer" containerID="9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.210306 4680 scope.go:117] "RemoveContainer" containerID="ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.246852 4680 scope.go:117] "RemoveContainer" containerID="6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc" Jan 26 18:00:33 crc kubenswrapper[4680]: E0126 18:00:33.251834 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc\": container with ID starting with 6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc not found: ID does not exist" containerID="6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.251883 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc"} err="failed to get container status \"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc\": rpc error: code = NotFound desc = could not find container 
\"6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc\": container with ID starting with 6d465404e3048566e9dc14a9e0e3b0c44c69a35c05d93a05cff7fdb67a3771dc not found: ID does not exist" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.251913 4680 scope.go:117] "RemoveContainer" containerID="9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16" Jan 26 18:00:33 crc kubenswrapper[4680]: E0126 18:00:33.252553 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16\": container with ID starting with 9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16 not found: ID does not exist" containerID="9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.252580 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16"} err="failed to get container status \"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16\": rpc error: code = NotFound desc = could not find container \"9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16\": container with ID starting with 9823598b88442ff235e6192797a73ea0ff0781145f1fcc15acc0465eaa53ef16 not found: ID does not exist" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.252596 4680 scope.go:117] "RemoveContainer" containerID="ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e" Jan 26 18:00:33 crc kubenswrapper[4680]: E0126 18:00:33.253001 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e\": container with ID starting with ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e not found: ID does not exist" containerID="ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.253030 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e"} err="failed to get container status \"ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e\": rpc error: code = NotFound desc = could not find container \"ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e\": container with ID starting with ff87cb9724e3debeaa28193be50946e09d8e8d5092d171fa3c8de75ea71c026e not found: ID does not exist" Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.427128 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrqgd"] Jan 26 18:00:33 crc kubenswrapper[4680]: I0126 18:00:33.440619 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zrqgd"] Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.104436 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-79flx" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="registry-server" containerID="cri-o://57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a" gracePeriod=2 Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.714834 4680 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.714834 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.887743 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62r4h\" (UniqueName: \"kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h\") pod \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") "
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.887998 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities\") pod \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") "
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.888224 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content\") pod \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\" (UID: \"e161c03f-e577-483a-9ea0-d2f27dbf93ec\") "
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.888513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities" (OuterVolumeSpecName: "utilities") pod "e161c03f-e577-483a-9ea0-d2f27dbf93ec" (UID: "e161c03f-e577-483a-9ea0-d2f27dbf93ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.889045 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.898424 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h" (OuterVolumeSpecName: "kube-api-access-62r4h") pod "e161c03f-e577-483a-9ea0-d2f27dbf93ec" (UID: "e161c03f-e577-483a-9ea0-d2f27dbf93ec"). InnerVolumeSpecName "kube-api-access-62r4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.950636 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e161c03f-e577-483a-9ea0-d2f27dbf93ec" (UID: "e161c03f-e577-483a-9ea0-d2f27dbf93ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.990801 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62r4h\" (UniqueName: \"kubernetes.io/projected/e161c03f-e577-483a-9ea0-d2f27dbf93ec-kube-api-access-62r4h\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:34 crc kubenswrapper[4680]: I0126 18:00:34.990855 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e161c03f-e577-483a-9ea0-d2f27dbf93ec-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.115327 4680 generic.go:334] "Generic (PLEG): container finished" podID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerID="57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a" exitCode=0
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.115402 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerDied","Data":"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"}
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.115451 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79flx" event={"ID":"e161c03f-e577-483a-9ea0-d2f27dbf93ec","Type":"ContainerDied","Data":"cc24aed6fca7668f2ee78380741f06d2e4195add3a7b445a56d5e350021901dc"}
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.115476 4680 scope.go:117] "RemoveContainer" containerID="57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.116275 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79flx"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.149300 4680 scope.go:117] "RemoveContainer" containerID="86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.158143 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-79flx"]
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.180542 4680 scope.go:117] "RemoveContainer" containerID="081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.227286 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" path="/var/lib/kubelet/pods/a40cd913-49c0-409f-8ea4-80b1abcac149/volumes"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.230868 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-79flx"]
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.236796 4680 scope.go:117] "RemoveContainer" containerID="57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"
Jan 26 18:00:35 crc kubenswrapper[4680]: E0126 18:00:35.237447 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a\": container with ID starting with 57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a not found: ID does not exist" containerID="57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.237546 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a"} err="failed to get container status \"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a\": rpc error: code = NotFound desc = could not find container \"57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a\": container with ID starting with 57099bf92d6cf6861236f24fef2b44f70e216149ba2f1848d4cac94a0dde558a not found: ID does not exist"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.237621 4680 scope.go:117] "RemoveContainer" containerID="86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"
Jan 26 18:00:35 crc kubenswrapper[4680]: E0126 18:00:35.238391 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f\": container with ID starting with 86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f not found: ID does not exist" containerID="86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.238480 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f"} err="failed to get container status \"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f\": rpc error: code = NotFound desc = could not find container \"86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f\": container with ID starting with 86c17a759cbdaa1cf559cbb07e5dce6a88dbd5f6a22cba4179ba1b57bf4cbf2f not found: ID does not exist"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.238598 4680 scope.go:117] "RemoveContainer" containerID="081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27"
Jan 26 18:00:35 crc kubenswrapper[4680]: E0126 18:00:35.238927 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27\": container with ID starting with 081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27 not found: ID does not exist" containerID="081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27"
Jan 26 18:00:35 crc kubenswrapper[4680]: I0126 18:00:35.239031 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27"} err="failed to get container status \"081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27\": rpc error: code = NotFound desc = could not find container \"081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27\": container with ID starting with 081ea1231b337cf067fc7e22a02dc30ebdabca17deb8c6a6a1b09f26171b4e27 not found: ID does not exist"
Jan 26 18:00:37 crc kubenswrapper[4680]: I0126 18:00:37.182119 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" path="/var/lib/kubelet/pods/e161c03f-e577-483a-9ea0-d2f27dbf93ec/volumes"
Jan 26 18:00:39 crc kubenswrapper[4680]: I0126 18:00:39.764361 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:39 crc kubenswrapper[4680]: I0126 18:00:39.836121 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:40 crc kubenswrapper[4680]: I0126 18:00:40.413351 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"]
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.164475 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pgwb6" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" containerID="cri-o://ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4" gracePeriod=2
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.652764 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.743709 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities\") pod \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") "
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.743888 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content\") pod \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") "
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.744654 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities" (OuterVolumeSpecName: "utilities") pod "7be41f10-7361-4fad-a07d-f1faf16c4a1d" (UID: "7be41f10-7361-4fad-a07d-f1faf16c4a1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.746417 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wllwr\" (UniqueName: \"kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr\") pod \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\" (UID: \"7be41f10-7361-4fad-a07d-f1faf16c4a1d\") "
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.747288 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.765641 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr" (OuterVolumeSpecName: "kube-api-access-wllwr") pod "7be41f10-7361-4fad-a07d-f1faf16c4a1d" (UID: "7be41f10-7361-4fad-a07d-f1faf16c4a1d"). InnerVolumeSpecName "kube-api-access-wllwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.848820 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wllwr\" (UniqueName: \"kubernetes.io/projected/7be41f10-7361-4fad-a07d-f1faf16c4a1d-kube-api-access-wllwr\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.864937 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be41f10-7361-4fad-a07d-f1faf16c4a1d" (UID: "7be41f10-7361-4fad-a07d-f1faf16c4a1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:00:41 crc kubenswrapper[4680]: I0126 18:00:41.950959 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be41f10-7361-4fad-a07d-f1faf16c4a1d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.175651 4680 generic.go:334] "Generic (PLEG): container finished" podID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerID="ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4" exitCode=0
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.175725 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgwb6"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.175750 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerDied","Data":"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"}
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.176999 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgwb6" event={"ID":"7be41f10-7361-4fad-a07d-f1faf16c4a1d","Type":"ContainerDied","Data":"238bbe34b440855da28977ee345eff491e800014c62eb1795c36678247b6e901"}
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.177029 4680 scope.go:117] "RemoveContainer" containerID="ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.213313 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"]
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.214681 4680 scope.go:117] "RemoveContainer" containerID="70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.226143 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pgwb6"]
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.231806 4680 scope.go:117] "RemoveContainer" containerID="2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.278530 4680 scope.go:117] "RemoveContainer" containerID="ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"
Jan 26 18:00:42 crc kubenswrapper[4680]: E0126 18:00:42.279149 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4\": container with ID starting with ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4 not found: ID does not exist" containerID="ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.279185 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4"} err="failed to get container status \"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4\": rpc error: code = NotFound desc = could not find container \"ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4\": container with ID starting with ad9ef43595336c56867944651535ac24bda06147a7dba02f5c38ced1b1e463d4 not found: ID does not exist"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.279218 4680 scope.go:117] "RemoveContainer" containerID="70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"
Jan 26 18:00:42 crc kubenswrapper[4680]: E0126 18:00:42.279608 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205\": container with ID starting with 70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205 not found: ID does not exist" containerID="70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.279636 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205"} err="failed to get container status \"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205\": rpc error: code = NotFound desc = could not find container \"70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205\": container with ID starting with 70d4b4092630961c2901f5250abf85277f7bdad06e034c14b058ff6f5b945205 not found: ID does not exist"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.279651 4680 scope.go:117] "RemoveContainer" containerID="2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff"
Jan 26 18:00:42 crc kubenswrapper[4680]: E0126 18:00:42.280367 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff\": container with ID starting with 2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff not found: ID does not exist" containerID="2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff"
Jan 26 18:00:42 crc kubenswrapper[4680]: I0126 18:00:42.280395 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff"} err="failed to get container status \"2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff\": rpc error: code = NotFound desc = could not find container \"2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff\": container with ID starting with 2b5b332768a06e5c5c298a818a5b2b3b887a1ba831b94de1150bb1e89d6668ff not found: ID does not exist"
Jan 26 18:00:43 crc kubenswrapper[4680]: I0126 18:00:43.181285 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" path="/var/lib/kubelet/pods/7be41f10-7361-4fad-a07d-f1faf16c4a1d/volumes"
Jan 26 18:00:44 crc kubenswrapper[4680]: I0126 18:00:44.169595 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de"
Jan 26 18:00:44 crc kubenswrapper[4680]: E0126 18:00:44.170220 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:00:47 crc kubenswrapper[4680]: I0126 18:00:47.858860 4680 scope.go:117] "RemoveContainer" containerID="df71e1dfc51f65f6a2ff2b3651a20549d32529e9973236f52bd60e44dac02023"
Jan 26 18:00:58 crc kubenswrapper[4680]: I0126 18:00:58.170958 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:00:58 crc kubenswrapper[4680]: E0126 18:00:58.173617 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.175177 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490841-7j6ms"] Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.179976 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180007 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180022 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180029 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180042 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180048 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180059 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180079 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180093 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180100 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180113 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180118 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="extract-content" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180135 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180141 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180159 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180164 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: E0126 18:01:00.180173 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180180 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="extract-utilities" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180426 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e161c03f-e577-483a-9ea0-d2f27dbf93ec" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180445 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be41f10-7361-4fad-a07d-f1faf16c4a1d" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.180459 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40cd913-49c0-409f-8ea4-80b1abcac149" containerName="registry-server" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.181103 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.191413 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490841-7j6ms"] Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.254416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmsk\" (UniqueName: \"kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.254501 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.254650 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.254687 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.356360 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.356461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.356582 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnmsk\" (UniqueName: \"kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.356624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.368153 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.368454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.375771 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.377210 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnmsk\" (UniqueName: \"kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk\") pod \"keystone-cron-29490841-7j6ms\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.506957 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:00 crc kubenswrapper[4680]: I0126 18:01:00.972815 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490841-7j6ms"] Jan 26 18:01:01 crc kubenswrapper[4680]: W0126 18:01:00.999937 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4158def9_42b0_433e_a63b_473ec4a962f7.slice/crio-4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b WatchSource:0}: Error finding container 4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b: Status 404 returned error can't find the container with id 4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b Jan 26 18:01:01 crc kubenswrapper[4680]: I0126 18:01:01.385370 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490841-7j6ms" event={"ID":"4158def9-42b0-433e-a63b-473ec4a962f7","Type":"ContainerStarted","Data":"9d6dd97f1f2307994dc7fbef500f5e688fcf13af64a48a72958be862a05fd02e"} Jan 26 18:01:01 crc kubenswrapper[4680]: I0126 18:01:01.385752 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490841-7j6ms" event={"ID":"4158def9-42b0-433e-a63b-473ec4a962f7","Type":"ContainerStarted","Data":"4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b"} Jan 26 18:01:01 crc kubenswrapper[4680]: I0126 18:01:01.423496 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490841-7j6ms" podStartSLOduration=1.423474792 podStartE2EDuration="1.423474792s" podCreationTimestamp="2026-01-26 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:01:01.418086899 +0000 UTC m=+6936.579359188" watchObservedRunningTime="2026-01-26 18:01:01.423474792 +0000 UTC m=+6936.584747061" Jan 26 18:01:05 crc kubenswrapper[4680]: I0126 18:01:05.421699 4680 generic.go:334] "Generic (PLEG): container finished" podID="4158def9-42b0-433e-a63b-473ec4a962f7" containerID="9d6dd97f1f2307994dc7fbef500f5e688fcf13af64a48a72958be862a05fd02e" exitCode=0 Jan 26 18:01:05 crc kubenswrapper[4680]: I0126 18:01:05.421938 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490841-7j6ms" event={"ID":"4158def9-42b0-433e-a63b-473ec4a962f7","Type":"ContainerDied","Data":"9d6dd97f1f2307994dc7fbef500f5e688fcf13af64a48a72958be862a05fd02e"} Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.004791 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.126041 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys\") pod \"4158def9-42b0-433e-a63b-473ec4a962f7\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.126113 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnmsk\" (UniqueName: \"kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk\") pod \"4158def9-42b0-433e-a63b-473ec4a962f7\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.126232 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle\") pod \"4158def9-42b0-433e-a63b-473ec4a962f7\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.126319 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data\") pod \"4158def9-42b0-433e-a63b-473ec4a962f7\" (UID: \"4158def9-42b0-433e-a63b-473ec4a962f7\") " Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.134094 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk" (OuterVolumeSpecName: "kube-api-access-lnmsk") pod "4158def9-42b0-433e-a63b-473ec4a962f7" (UID: "4158def9-42b0-433e-a63b-473ec4a962f7"). InnerVolumeSpecName "kube-api-access-lnmsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.145457 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4158def9-42b0-433e-a63b-473ec4a962f7" (UID: "4158def9-42b0-433e-a63b-473ec4a962f7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.164228 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4158def9-42b0-433e-a63b-473ec4a962f7" (UID: "4158def9-42b0-433e-a63b-473ec4a962f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.198867 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data" (OuterVolumeSpecName: "config-data") pod "4158def9-42b0-433e-a63b-473ec4a962f7" (UID: "4158def9-42b0-433e-a63b-473ec4a962f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.228669 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.228710 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.228723 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4158def9-42b0-433e-a63b-473ec4a962f7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.228735 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnmsk\" (UniqueName: \"kubernetes.io/projected/4158def9-42b0-433e-a63b-473ec4a962f7-kube-api-access-lnmsk\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.441344 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490841-7j6ms" event={"ID":"4158def9-42b0-433e-a63b-473ec4a962f7","Type":"ContainerDied","Data":"4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b"} Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.441389 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6429c0522a6c512291e3418ce79cdfb69a09a8deeb5770a2d4e336933d592b" Jan 26 18:01:07 crc kubenswrapper[4680]: I0126 18:01:07.441448 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490841-7j6ms" Jan 26 18:01:10 crc kubenswrapper[4680]: I0126 18:01:10.169295 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:01:10 crc kubenswrapper[4680]: E0126 18:01:10.170242 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:01:21 crc kubenswrapper[4680]: I0126 18:01:21.170386 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:01:21 crc kubenswrapper[4680]: E0126 18:01:21.171196 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:01:35 crc kubenswrapper[4680]: I0126 18:01:35.176911 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:01:35 crc kubenswrapper[4680]: E0126 18:01:35.177769 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:01:49 crc kubenswrapper[4680]: I0126 18:01:49.169979 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:01:49 crc kubenswrapper[4680]: E0126 18:01:49.171363 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.117528 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:00 crc kubenswrapper[4680]: E0126 18:02:00.124960 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4158def9-42b0-433e-a63b-473ec4a962f7" containerName="keystone-cron" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.125041 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4158def9-42b0-433e-a63b-473ec4a962f7" containerName="keystone-cron" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.125421 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4158def9-42b0-433e-a63b-473ec4a962f7" containerName="keystone-cron" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.127319 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.144470 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.169404 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.169508 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbvw\" (UniqueName: \"kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.169718 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.271548 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.271686 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.272412 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.272467 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.272596 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbvw\" (UniqueName: \"kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.296123 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5mbvw\" (UniqueName: \"kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw\") pod \"redhat-marketplace-6m46f\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:00 crc kubenswrapper[4680]: I0126 18:02:00.448550 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:01 crc kubenswrapper[4680]: I0126 18:02:01.074412 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:01 crc kubenswrapper[4680]: I0126 18:02:01.144606 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerStarted","Data":"174af5d6fe07d26d415514e1fc67005923534fd1f73795c16fba6bd928588fab"} Jan 26 18:02:01 crc kubenswrapper[4680]: I0126 18:02:01.170895 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:02:01 crc kubenswrapper[4680]: E0126 18:02:01.171133 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:02:02 crc kubenswrapper[4680]: I0126 18:02:02.162755 4680 generic.go:334] "Generic (PLEG): container finished" podID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerID="b1c468f3b9bd6d89bc1448eebc7ec9c88010188d037539506f5ea7acd952046f" exitCode=0 Jan 26 18:02:02 crc kubenswrapper[4680]: I0126 18:02:02.163496 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerDied","Data":"b1c468f3b9bd6d89bc1448eebc7ec9c88010188d037539506f5ea7acd952046f"} Jan 26 18:02:04 crc kubenswrapper[4680]: I0126 18:02:04.213542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerStarted","Data":"890748b8739c05b19c578b765b65e9c52009f31020fd8f90e60d215138a4050b"} Jan 26 18:02:06 crc kubenswrapper[4680]: I0126 18:02:06.233933 4680 generic.go:334] "Generic (PLEG): container finished" podID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerID="890748b8739c05b19c578b765b65e9c52009f31020fd8f90e60d215138a4050b" exitCode=0 Jan 26 18:02:06 crc kubenswrapper[4680]: I0126 18:02:06.234256 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerDied","Data":"890748b8739c05b19c578b765b65e9c52009f31020fd8f90e60d215138a4050b"} Jan 26 18:02:07 crc kubenswrapper[4680]: I0126 18:02:07.244043 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerStarted","Data":"827a0050960f578225091fac8e624028951dabbc4da2aa4c68555349ed4705db"} Jan 26 18:02:07 crc kubenswrapper[4680]: I0126 18:02:07.269466 4680 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-6m46f" podStartSLOduration=2.587592953 podStartE2EDuration="7.269444312s" podCreationTimestamp="2026-01-26 18:02:00 +0000 UTC" firstStartedPulling="2026-01-26 18:02:02.165204493 +0000 UTC m=+6997.326476762" lastFinishedPulling="2026-01-26 18:02:06.847055852 +0000 UTC m=+7002.008328121" observedRunningTime="2026-01-26 18:02:07.261183277 +0000 UTC m=+7002.422455546" watchObservedRunningTime="2026-01-26 18:02:07.269444312 +0000 UTC m=+7002.430716581" Jan 26 18:02:10 crc kubenswrapper[4680]: I0126 18:02:10.449368 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:10 crc kubenswrapper[4680]: I0126 18:02:10.449753 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:10 crc kubenswrapper[4680]: I0126 18:02:10.500852 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:16 crc kubenswrapper[4680]: I0126 18:02:16.170467 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:02:16 crc kubenswrapper[4680]: E0126 18:02:16.172910 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:02:20 crc kubenswrapper[4680]: I0126 18:02:20.503995 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:20 crc kubenswrapper[4680]: I0126 18:02:20.566512 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:21 crc kubenswrapper[4680]: I0126 18:02:21.374515 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6m46f" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="registry-server" containerID="cri-o://827a0050960f578225091fac8e624028951dabbc4da2aa4c68555349ed4705db" gracePeriod=2 Jan 26 18:02:22 crc kubenswrapper[4680]: I0126 18:02:22.393014 4680 generic.go:334] "Generic (PLEG): container finished" podID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerID="827a0050960f578225091fac8e624028951dabbc4da2aa4c68555349ed4705db" exitCode=0 Jan 26 18:02:22 crc kubenswrapper[4680]: I0126 18:02:22.393158 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerDied","Data":"827a0050960f578225091fac8e624028951dabbc4da2aa4c68555349ed4705db"} Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.096059 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.191894 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content\") pod \"5da5da78-abbe-4238-a82d-0ce637e698ed\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.212880 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5da5da78-abbe-4238-a82d-0ce637e698ed" (UID: "5da5da78-abbe-4238-a82d-0ce637e698ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.293972 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbvw\" (UniqueName: \"kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw\") pod \"5da5da78-abbe-4238-a82d-0ce637e698ed\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.294182 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities\") pod \"5da5da78-abbe-4238-a82d-0ce637e698ed\" (UID: \"5da5da78-abbe-4238-a82d-0ce637e698ed\") " Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.294584 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.295124 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities" (OuterVolumeSpecName: "utilities") pod "5da5da78-abbe-4238-a82d-0ce637e698ed" (UID: "5da5da78-abbe-4238-a82d-0ce637e698ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.300893 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw" (OuterVolumeSpecName: "kube-api-access-5mbvw") pod "5da5da78-abbe-4238-a82d-0ce637e698ed" (UID: "5da5da78-abbe-4238-a82d-0ce637e698ed"). InnerVolumeSpecName "kube-api-access-5mbvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.397626 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbvw\" (UniqueName: \"kubernetes.io/projected/5da5da78-abbe-4238-a82d-0ce637e698ed-kube-api-access-5mbvw\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.397669 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da5da78-abbe-4238-a82d-0ce637e698ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.409412 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m46f" event={"ID":"5da5da78-abbe-4238-a82d-0ce637e698ed","Type":"ContainerDied","Data":"174af5d6fe07d26d415514e1fc67005923534fd1f73795c16fba6bd928588fab"} Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.409481 4680 scope.go:117] "RemoveContainer" containerID="827a0050960f578225091fac8e624028951dabbc4da2aa4c68555349ed4705db" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.409502 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m46f" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.439499 4680 scope.go:117] "RemoveContainer" containerID="890748b8739c05b19c578b765b65e9c52009f31020fd8f90e60d215138a4050b" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.455852 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.481375 4680 scope.go:117] "RemoveContainer" containerID="b1c468f3b9bd6d89bc1448eebc7ec9c88010188d037539506f5ea7acd952046f" Jan 26 18:02:23 crc kubenswrapper[4680]: I0126 18:02:23.897925 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m46f"] Jan 26 18:02:25 crc kubenswrapper[4680]: I0126 18:02:25.184749 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" path="/var/lib/kubelet/pods/5da5da78-abbe-4238-a82d-0ce637e698ed/volumes" Jan 26 18:02:28 crc kubenswrapper[4680]: I0126 18:02:28.170523 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de" Jan 26 18:02:29 crc kubenswrapper[4680]: I0126 18:02:29.481870 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8"} Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.183168 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55886c9d87-ns6s7"] Jan 26 18:03:36 crc kubenswrapper[4680]: E0126 18:03:36.184270 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="extract-content" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.184289 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="extract-content" Jan 26 18:03:36 crc kubenswrapper[4680]: E0126 18:03:36.184327 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="registry-server" Jan 26 18:03:36 crc 
kubenswrapper[4680]: I0126 18:03:36.184336 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="registry-server" Jan 26 18:03:36 crc kubenswrapper[4680]: E0126 18:03:36.184367 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="extract-utilities" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.184375 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="extract-utilities" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.184650 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da5da78-abbe-4238-a82d-0ce637e698ed" containerName="registry-server" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.189616 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.229638 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55886c9d87-ns6s7"] Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.302959 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5lq\" (UniqueName: \"kubernetes.io/projected/660b5d4e-3529-470c-91cc-2acaa9245b65-kube-api-access-rl5lq\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.303034 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-combined-ca-bundle\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.303145 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-httpd-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.303363 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-public-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.303407 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-internal-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.303431 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc 
kubenswrapper[4680]: I0126 18:03:36.303502 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-ovndb-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.405919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5lq\" (UniqueName: \"kubernetes.io/projected/660b5d4e-3529-470c-91cc-2acaa9245b65-kube-api-access-rl5lq\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.407534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-combined-ca-bundle\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.407724 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-httpd-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.407916 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-public-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.408004 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-internal-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.408197 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.408287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-ovndb-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.417675 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-combined-ca-bundle\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.418403 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-public-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.418608 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.418984 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-httpd-config\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.419858 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-internal-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.421515 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660b5d4e-3529-470c-91cc-2acaa9245b65-ovndb-tls-certs\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.432520 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5lq\" (UniqueName: \"kubernetes.io/projected/660b5d4e-3529-470c-91cc-2acaa9245b65-kube-api-access-rl5lq\") pod \"neutron-55886c9d87-ns6s7\" (UID: \"660b5d4e-3529-470c-91cc-2acaa9245b65\") " pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:36 crc kubenswrapper[4680]: I0126 18:03:36.511166 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:38 crc kubenswrapper[4680]: I0126 18:03:38.097009 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55886c9d87-ns6s7"] Jan 26 18:03:38 crc kubenswrapper[4680]: I0126 18:03:38.594332 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55886c9d87-ns6s7" event={"ID":"660b5d4e-3529-470c-91cc-2acaa9245b65","Type":"ContainerStarted","Data":"351b2f6cc90a65d7d2dde9924761ed541b930006a4a9ebc361c7b485a0905867"} Jan 26 18:03:38 crc kubenswrapper[4680]: I0126 18:03:38.594658 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55886c9d87-ns6s7" event={"ID":"660b5d4e-3529-470c-91cc-2acaa9245b65","Type":"ContainerStarted","Data":"b12ee8cc73f014d0a873ef5b31aa10ea8a71601f9bf9e6e448f457f199c8c126"} Jan 26 18:03:39 crc kubenswrapper[4680]: I0126 18:03:39.609696 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55886c9d87-ns6s7" event={"ID":"660b5d4e-3529-470c-91cc-2acaa9245b65","Type":"ContainerStarted","Data":"c5afab9f51d433667e7dd726aecf59823773f963d64f23a91a19cfea129bf7dc"} Jan 26 18:03:39 crc kubenswrapper[4680]: I0126 18:03:39.610511 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:03:39 crc kubenswrapper[4680]: I0126 18:03:39.635344 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55886c9d87-ns6s7" podStartSLOduration=3.635302884 podStartE2EDuration="3.635302884s" podCreationTimestamp="2026-01-26 18:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:39.627595076 +0000 UTC m=+7094.788867355" watchObservedRunningTime="2026-01-26 18:03:39.635302884 +0000 UTC m=+7094.796575163" Jan 26 18:04:06 crc kubenswrapper[4680]: I0126 18:04:06.538876 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55886c9d87-ns6s7" Jan 26 18:04:06 crc kubenswrapper[4680]: I0126 18:04:06.746456 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867c94b99c-zl66f"] Jan 26 18:04:06 crc kubenswrapper[4680]: I0126 18:04:06.752581 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867c94b99c-zl66f" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-api" containerID="cri-o://7a585c76fd49318bd18a06bd5ed5a5753ec4135e9e63b2f9fad5904839c52463" gracePeriod=30 Jan 26 18:04:06 crc kubenswrapper[4680]: I0126 18:04:06.753155 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867c94b99c-zl66f" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-httpd" containerID="cri-o://3029c345b20e6aed1bff7196b1f3c3403d59164aeba3b12350394380bca106a8" gracePeriod=30 Jan 26 18:04:07 crc kubenswrapper[4680]: I0126 18:04:07.881835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerDied","Data":"3029c345b20e6aed1bff7196b1f3c3403d59164aeba3b12350394380bca106a8"} Jan 26 18:04:07 crc kubenswrapper[4680]: I0126 18:04:07.881993 4680 generic.go:334] "Generic (PLEG): container finished" podID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerID="3029c345b20e6aed1bff7196b1f3c3403d59164aeba3b12350394380bca106a8" exitCode=0 Jan 26 18:04:10 crc kubenswrapper[4680]: 
Jan 26 18:04:10 crc kubenswrapper[4680]: I0126 18:04:10.909683 4680 generic.go:334] "Generic (PLEG): container finished" podID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerID="7a585c76fd49318bd18a06bd5ed5a5753ec4135e9e63b2f9fad5904839c52463" exitCode=0
Jan 26 18:04:10 crc kubenswrapper[4680]: I0126 18:04:10.909771 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerDied","Data":"7a585c76fd49318bd18a06bd5ed5a5753ec4135e9e63b2f9fad5904839c52463"}
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.731273 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867c94b99c-zl66f"
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902318 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902416 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk6sv\" (UniqueName: \"kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902485 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902504 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902543 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902638 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.902653 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs\") pod \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\" (UID: \"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0\") "
Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.923285 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0").
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.929486 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv" (OuterVolumeSpecName: "kube-api-access-dk6sv") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "kube-api-access-dk6sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.929725 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867c94b99c-zl66f" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.929620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867c94b99c-zl66f" event={"ID":"da8c78f6-4343-43fb-8072-bfa9aa2ba5c0","Type":"ContainerDied","Data":"3d807a9724e1aa08df42854fc585c6dab7f237c9a14070cd7eaa6b2d0fe436fd"} Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.930638 4680 scope.go:117] "RemoveContainer" containerID="3029c345b20e6aed1bff7196b1f3c3403d59164aeba3b12350394380bca106a8" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.973264 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.980667 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.993972 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4680]: I0126 18:04:11.996148 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config" (OuterVolumeSpecName: "config") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004638 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004674 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk6sv\" (UniqueName: \"kubernetes.io/projected/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-kube-api-access-dk6sv\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004690 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004701 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004716 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.004728 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.010307 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" (UID: "da8c78f6-4343-43fb-8072-bfa9aa2ba5c0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.106474 4680 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.129829 4680 scope.go:117] "RemoveContainer" containerID="7a585c76fd49318bd18a06bd5ed5a5753ec4135e9e63b2f9fad5904839c52463"
Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.268560 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867c94b99c-zl66f"]
Jan 26 18:04:12 crc kubenswrapper[4680]: I0126 18:04:12.277784 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-867c94b99c-zl66f"]
Jan 26 18:04:13 crc kubenswrapper[4680]: I0126 18:04:13.182166 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" path="/var/lib/kubelet/pods/da8c78f6-4343-43fb-8072-bfa9aa2ba5c0/volumes"
Jan 26 18:04:46 crc kubenswrapper[4680]: I0126 18:04:46.981253 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:04:46 crc kubenswrapper[4680]: I0126 18:04:46.982287 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:05:16 crc kubenswrapper[4680]: I0126 18:05:16.981282 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:05:16 crc kubenswrapper[4680]: I0126 18:05:16.982128 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:05:41 crc kubenswrapper[4680]: I0126 18:05:41.994830 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-744f4bf557-dr6ng" podUID="9d9139f2-446c-49ea-9d61-f1d48df4998b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 18:05:46 crc kubenswrapper[4680]: I0126 18:05:46.981228 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:05:46 crc kubenswrapper[4680]: I0126 18:05:46.981837 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
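
Note: the restart decision that follows is driven by the liveness failures above, spaced exactly 30s apart (18:04:46, 18:05:16, 18:05:46) against http://127.0.0.1:8798/health. The machine-config-daemon's probe spec is not part of this log; the sketch below is a client-go Probe whose shape is merely consistent with the observed cadence (PeriodSeconds and FailureThreshold are inferred, not read from the actual manifest):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // failures land 30s apart in the log
            FailureThreshold: 3,  // third consecutive failure marks the container unhealthy
        }
        fmt.Printf("%+v\n", probe)
    }
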
Jan 26 18:05:46 crc kubenswrapper[4680]: I0126 18:05:46.981889 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm"
Jan 26 18:05:46 crc kubenswrapper[4680]: I0126 18:05:46.982428 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 18:05:46 crc kubenswrapper[4680]: I0126 18:05:46.982490 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8" gracePeriod=600
Jan 26 18:05:47 crc kubenswrapper[4680]: I0126 18:05:47.848888 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8" exitCode=0
Jan 26 18:05:47 crc kubenswrapper[4680]: I0126 18:05:47.848979 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8"}
Jan 26 18:05:47 crc kubenswrapper[4680]: I0126 18:05:47.849792 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"}
Jan 26 18:05:47 crc kubenswrapper[4680]: I0126 18:05:47.849840 4680 scope.go:117] "RemoveContainer" containerID="e998d22017cf3f32552930ab82ff22cfbcd04ceceb55a24f14d9d68b8bf267de"
Jan 26 18:06:05 crc kubenswrapper[4680]: E0126 18:06:05.213474 4680 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.20:39338->38.102.83.20:45165: write tcp 38.102.83.20:39338->38.102.83.20:45165: write: broken pipe
Jan 26 18:08:16 crc kubenswrapper[4680]: I0126 18:08:16.980646 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:08:16 crc kubenswrapper[4680]: I0126 18:08:16.981319 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:08:46 crc kubenswrapper[4680]: I0126 18:08:46.980960 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect:
connection refused" start-of-body= Jan 26 18:08:46 crc kubenswrapper[4680]: I0126 18:08:46.981641 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:09:16 crc kubenswrapper[4680]: I0126 18:09:16.981140 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:09:16 crc kubenswrapper[4680]: I0126 18:09:16.981772 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:09:16 crc kubenswrapper[4680]: I0126 18:09:16.981850 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 18:09:16 crc kubenswrapper[4680]: I0126 18:09:16.982839 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:09:16 crc kubenswrapper[4680]: I0126 18:09:16.982894 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" gracePeriod=600 Jan 26 18:09:17 crc kubenswrapper[4680]: E0126 18:09:17.104972 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:09:17 crc kubenswrapper[4680]: I0126 18:09:17.752450 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" exitCode=0 Jan 26 18:09:17 crc kubenswrapper[4680]: I0126 18:09:17.752494 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"} Jan 26 18:09:17 crc kubenswrapper[4680]: I0126 18:09:17.752557 4680 scope.go:117] "RemoveContainer" containerID="7cb409b103b5e31be67519b9868e2bc2ad8f3142f92b9f24bbb1599270cd1ca8" Jan 26 18:09:17 crc kubenswrapper[4680]: I0126 18:09:17.753092 4680 scope.go:117] 
"RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:09:17 crc kubenswrapper[4680]: E0126 18:09:17.753410 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:09:30 crc kubenswrapper[4680]: I0126 18:09:30.170286 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:09:30 crc kubenswrapper[4680]: E0126 18:09:30.171023 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:09:43 crc kubenswrapper[4680]: I0126 18:09:43.169987 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:09:43 crc kubenswrapper[4680]: E0126 18:09:43.170619 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:09:57 crc kubenswrapper[4680]: I0126 18:09:57.170138 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:09:57 crc kubenswrapper[4680]: E0126 18:09:57.172208 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.672520 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:10 crc kubenswrapper[4680]: E0126 18:10:10.673645 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-api" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.673663 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-api" Jan 26 18:10:10 crc kubenswrapper[4680]: E0126 18:10:10.673689 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-httpd" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.673696 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-httpd" 
Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.673945 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-httpd" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.673965 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8c78f6-4343-43fb-8072-bfa9aa2ba5c0" containerName="neutron-api" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.676456 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.702805 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.839116 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzxd\" (UniqueName: \"kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.839474 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.839797 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.942014 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.942304 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvzxd\" (UniqueName: \"kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.942407 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.943009 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 
18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.943302 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:10 crc kubenswrapper[4680]: I0126 18:10:10.979717 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvzxd\" (UniqueName: \"kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd\") pod \"redhat-operators-5hkk8\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:11 crc kubenswrapper[4680]: I0126 18:10:11.007231 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:11 crc kubenswrapper[4680]: I0126 18:10:11.169904 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:10:11 crc kubenswrapper[4680]: E0126 18:10:11.170280 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:10:11 crc kubenswrapper[4680]: I0126 18:10:11.576967 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:11 crc kubenswrapper[4680]: W0126 18:10:11.583495 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf486d30_e630_45d1_bc94_fad641f26616.slice/crio-4852e47d12a64315c34a72184537ba959a436de708755f41a7de6ab054e86368 WatchSource:0}: Error finding container 4852e47d12a64315c34a72184537ba959a436de708755f41a7de6ab054e86368: Status 404 returned error can't find the container with id 4852e47d12a64315c34a72184537ba959a436de708755f41a7de6ab054e86368 Jan 26 18:10:12 crc kubenswrapper[4680]: I0126 18:10:12.223391 4680 generic.go:334] "Generic (PLEG): container finished" podID="af486d30-e630-45d1-bc94-fad641f26616" containerID="2b368a156aca7fb69d8df4b1514897f81c16571c7735b9cb4733900bdff3c912" exitCode=0 Jan 26 18:10:12 crc kubenswrapper[4680]: I0126 18:10:12.223495 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerDied","Data":"2b368a156aca7fb69d8df4b1514897f81c16571c7735b9cb4733900bdff3c912"} Jan 26 18:10:12 crc kubenswrapper[4680]: I0126 18:10:12.223692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerStarted","Data":"4852e47d12a64315c34a72184537ba959a436de708755f41a7de6ab054e86368"} Jan 26 18:10:12 crc kubenswrapper[4680]: I0126 18:10:12.226266 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:10:14 crc kubenswrapper[4680]: I0126 18:10:14.239824 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerStarted","Data":"9b1fc9b6eb823a42631f9efa4adfe74c2949ba2889f48e73b4725672c8c32a23"} Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.442488 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlhvm"] Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.445038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.458293 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlhvm"] Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.555631 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.556050 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.556269 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjfh\" (UniqueName: \"kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.657903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.657982 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.658164 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmjfh\" (UniqueName: \"kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.658390 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " 
pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.658552 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.683234 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmjfh\" (UniqueName: \"kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh\") pod \"community-operators-vlhvm\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:16 crc kubenswrapper[4680]: I0126 18:10:16.765920 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:17 crc kubenswrapper[4680]: I0126 18:10:17.265278 4680 generic.go:334] "Generic (PLEG): container finished" podID="af486d30-e630-45d1-bc94-fad641f26616" containerID="9b1fc9b6eb823a42631f9efa4adfe74c2949ba2889f48e73b4725672c8c32a23" exitCode=0
Jan 26 18:10:17 crc kubenswrapper[4680]: I0126 18:10:17.265347 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerDied","Data":"9b1fc9b6eb823a42631f9efa4adfe74c2949ba2889f48e73b4725672c8c32a23"}
Jan 26 18:10:17 crc kubenswrapper[4680]: I0126 18:10:17.397937 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlhvm"]
Jan 26 18:10:18 crc kubenswrapper[4680]: I0126 18:10:18.279046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerStarted","Data":"b757fcc7cccb4e542ac62bc2fca4bd68688ee418cc2b88fec40c2ba851e41724"}
Jan 26 18:10:18 crc kubenswrapper[4680]: I0126 18:10:18.283674 4680 generic.go:334] "Generic (PLEG): container finished" podID="7beff252-7486-4796-a37b-1a07e600909c" containerID="b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb" exitCode=0
Jan 26 18:10:18 crc kubenswrapper[4680]: I0126 18:10:18.283713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerDied","Data":"b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb"}
Jan 26 18:10:18 crc kubenswrapper[4680]: I0126 18:10:18.283734 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerStarted","Data":"13c064918271fbb97a29feae21682ce44fa668b2cb425ada9474cefa02cd39d4"}
Jan 26 18:10:18 crc kubenswrapper[4680]: I0126 18:10:18.370173 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5hkk8" podStartSLOduration=2.838742313 podStartE2EDuration="8.370155907s" podCreationTimestamp="2026-01-26 18:10:10 +0000 UTC" firstStartedPulling="2026-01-26 18:10:12.225096728 +0000 UTC m=+7487.386368997" lastFinishedPulling="2026-01-26 18:10:17.756510322 +0000 UTC m=+7492.917782591" observedRunningTime="2026-01-26 18:10:18.336552764 +0000 UTC m=+7493.497825033" watchObservedRunningTime="2026-01-26 18:10:18.370155907 +0000 UTC m=+7493.531428176"
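
Note: the latency-tracker entry above is internally consistent. podStartE2EDuration is observedRunningTime minus podCreationTimestamp: 18:10:18.370155907 - 18:10:10 = 8.370155907s. podStartSLOduration excludes the image-pull window: 8.370155907 - (18:10:17.756510322 - 18:10:12.225096728) = 8.370155907 - 5.531413594 = 2.838742313s, exactly the logged value. The same arithmetic, checked mechanically:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the pod_startup_latency_tracker entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-26 18:10:10 +0000 UTC")
        running := parse("2026-01-26 18:10:18.370155907 +0000 UTC")
        pullStart := parse("2026-01-26 18:10:12.225096728 +0000 UTC")
        pullEnd := parse("2026-01-26 18:10:17.756510322 +0000 UTC")

        e2e := running.Sub(created)         // podStartE2EDuration: 8.370155907s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 2.838742313s
        fmt.Println(e2e, slo)
    }
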
Jan 26 18:10:19 crc kubenswrapper[4680]: I0126 18:10:19.293862 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerStarted","Data":"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"}
Jan 26 18:10:21 crc kubenswrapper[4680]: I0126 18:10:21.008387 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5hkk8"
Jan 26 18:10:21 crc kubenswrapper[4680]: I0126 18:10:21.008710 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5hkk8"
Jan 26 18:10:21 crc kubenswrapper[4680]: I0126 18:10:21.315597 4680 generic.go:334] "Generic (PLEG): container finished" podID="7beff252-7486-4796-a37b-1a07e600909c" containerID="a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4" exitCode=0
Jan 26 18:10:21 crc kubenswrapper[4680]: I0126 18:10:21.315648 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerDied","Data":"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"}
Jan 26 18:10:22 crc kubenswrapper[4680]: I0126 18:10:22.065513 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hkk8" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" probeResult="failure" output=<
Jan 26 18:10:22 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 18:10:22 crc kubenswrapper[4680]: >
Jan 26 18:10:22 crc kubenswrapper[4680]: I0126 18:10:22.326707 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerStarted","Data":"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea"}
Jan 26 18:10:22 crc kubenswrapper[4680]: I0126 18:10:22.346050 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlhvm" podStartSLOduration=2.700691571 podStartE2EDuration="6.346035291s" podCreationTimestamp="2026-01-26 18:10:16 +0000 UTC" firstStartedPulling="2026-01-26 18:10:18.286385491 +0000 UTC m=+7493.447657760" lastFinishedPulling="2026-01-26 18:10:21.931729211 +0000 UTC m=+7497.093001480" observedRunningTime="2026-01-26 18:10:22.343285513 +0000 UTC m=+7497.504557782" watchObservedRunningTime="2026-01-26 18:10:22.346035291 +0000 UTC m=+7497.507307560"
Jan 26 18:10:23 crc kubenswrapper[4680]: I0126 18:10:23.170660 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:10:23 crc kubenswrapper[4680]: E0126 18:10:23.171927 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:10:26 crc kubenswrapper[4680]: I0126 18:10:26.767120 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlhvm"
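
Note: the startup-probe output above, timeout: failed to connect service ":50051" within 1s, is characteristic of a gRPC health check against the catalog pod's registry-server, which serves on port 50051 and appears simply not to be accepting connections yet while it loads the catalog data. The actual probe command is not shown in this log; a minimal client performing the equivalent check might look like this (only the endpoint and the 1s budget come from the log output, everything else is assumed):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Block on the dial so an unreachable server fails within the 1s
        // budget, mirroring the probe's timeout message.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("health check failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING once the registry is up
    }
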
Jan 26 18:10:26 crc kubenswrapper[4680]: I0126 18:10:26.768149 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:27 crc kubenswrapper[4680]: I0126 18:10:27.817438 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vlhvm" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="registry-server" probeResult="failure" output=<
Jan 26 18:10:27 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 18:10:27 crc kubenswrapper[4680]: >
Jan 26 18:10:32 crc kubenswrapper[4680]: I0126 18:10:32.056212 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hkk8" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" probeResult="failure" output=<
Jan 26 18:10:32 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s
Jan 26 18:10:32 crc kubenswrapper[4680]: >
Jan 26 18:10:36 crc kubenswrapper[4680]: I0126 18:10:36.827656 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:36 crc kubenswrapper[4680]: I0126 18:10:36.886501 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:37 crc kubenswrapper[4680]: I0126 18:10:37.084565 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlhvm"]
Jan 26 18:10:38 crc kubenswrapper[4680]: I0126 18:10:38.170366 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:10:38 crc kubenswrapper[4680]: E0126 18:10:38.170902 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:10:38 crc kubenswrapper[4680]: I0126 18:10:38.492893 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlhvm" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="registry-server" containerID="cri-o://ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea" gracePeriod=2
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.153819 4680 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-vlhvm" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.259334 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities\") pod \"7beff252-7486-4796-a37b-1a07e600909c\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.259588 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content\") pod \"7beff252-7486-4796-a37b-1a07e600909c\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.259650 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmjfh\" (UniqueName: \"kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh\") pod \"7beff252-7486-4796-a37b-1a07e600909c\" (UID: \"7beff252-7486-4796-a37b-1a07e600909c\") " Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.260085 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities" (OuterVolumeSpecName: "utilities") pod "7beff252-7486-4796-a37b-1a07e600909c" (UID: "7beff252-7486-4796-a37b-1a07e600909c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.260264 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.273152 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh" (OuterVolumeSpecName: "kube-api-access-fmjfh") pod "7beff252-7486-4796-a37b-1a07e600909c" (UID: "7beff252-7486-4796-a37b-1a07e600909c"). InnerVolumeSpecName "kube-api-access-fmjfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.339855 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7beff252-7486-4796-a37b-1a07e600909c" (UID: "7beff252-7486-4796-a37b-1a07e600909c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.363227 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7beff252-7486-4796-a37b-1a07e600909c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.363521 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmjfh\" (UniqueName: \"kubernetes.io/projected/7beff252-7486-4796-a37b-1a07e600909c-kube-api-access-fmjfh\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.496627 4680 generic.go:334] "Generic (PLEG): container finished" podID="7beff252-7486-4796-a37b-1a07e600909c" containerID="ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea" exitCode=0 Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.496671 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerDied","Data":"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea"} Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.496698 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlhvm" event={"ID":"7beff252-7486-4796-a37b-1a07e600909c","Type":"ContainerDied","Data":"13c064918271fbb97a29feae21682ce44fa668b2cb425ada9474cefa02cd39d4"} Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.496716 4680 scope.go:117] "RemoveContainer" containerID="ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.496909 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlhvm"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.540832 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlhvm"]
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.556421 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlhvm"]
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.558975 4680 scope.go:117] "RemoveContainer" containerID="a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.580899 4680 scope.go:117] "RemoveContainer" containerID="b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.627360 4680 scope.go:117] "RemoveContainer" containerID="ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea"
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.633117 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea\": container with ID starting with ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea not found: ID does not exist" containerID="ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.633174 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea"} err="failed to get container status \"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea\": rpc error: code = NotFound desc = could not find container \"ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea\": container with ID starting with ed7ae04db11060b21f0e56230d37a111fb7f1ef505a405fa72b9cd44bc983eea not found: ID does not exist"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.633205 4680 scope.go:117] "RemoveContainer" containerID="a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.633590 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4\": container with ID starting with a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4 not found: ID does not exist" containerID="a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.633620 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4"} err="failed to get container status \"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4\": rpc error: code = NotFound desc = could not find container \"a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4\": container with ID starting with a982cd143e02026979c44c07705c28499c2517fa987362e351f0c7a4c073a0c4 not found: ID does not exist"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.633646 4680 scope.go:117] "RemoveContainer" containerID="b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb"
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.634153 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb\": container with ID starting with b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb not found: ID does not exist" containerID="b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.634429 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb"} err="failed to get container status \"b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb\": rpc error: code = NotFound desc = could not find container \"b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb\": container with ID starting with b4a7ea3d38934290f6076095bcde41903d9ccc0f40369fec6ec7623ab51913cb not found: ID does not exist"
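
Note: the three NotFound errors above are benign. By the time the kubelet retried DeleteContainer, the pod's containers had already been removed along with their sandbox, so the state it wanted ("container gone") already held; it logs the error and moves on, the volumes already unmounted and the API object already gone. Cleanup code stays idempotent by treating NotFound as success; a sketch of the pattern (helper names are hypothetical, not the kubelet's own):

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the CRI's NotFound gRPC status.
    var errNotFound = errors.New("NotFound")

    // removeContainer propagates only real failures; "already gone" is the
    // state we wanted, so it counts as success.
    func removeContainer(id string, deleteFn func(string) error) error {
        if err := deleteFn(id); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil
    }

    func main() {
        alreadyGone := func(string) error {
            return fmt.Errorf("rpc error: code = NotFound: %w", errNotFound)
        }
        fmt.Println(removeContainer("ed7ae04db110", alreadyGone)) // <nil>
    }
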
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.673190 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"]
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.675922 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="registry-server"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.675963 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="registry-server"
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.676002 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="extract-utilities"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.676010 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="extract-utilities"
Jan 26 18:10:39 crc kubenswrapper[4680]: E0126 18:10:39.676046 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="extract-content"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.676053 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="extract-content"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.680866 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7beff252-7486-4796-a37b-1a07e600909c" containerName="registry-server"
Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.694880 4680 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.724907 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"] Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.873249 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.873639 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6fs\" (UniqueName: \"kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.873724 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.975991 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.976292 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.976602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6fs\" (UniqueName: \"kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.976732 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.976600 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:39 crc kubenswrapper[4680]: I0126 18:10:39.994217 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lj6fs\" (UniqueName: \"kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs\") pod \"certified-operators-xlm5h\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:40 crc kubenswrapper[4680]: I0126 18:10:40.025681 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:40 crc kubenswrapper[4680]: I0126 18:10:40.615660 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"] Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.061123 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.112746 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.181167 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7beff252-7486-4796-a37b-1a07e600909c" path="/var/lib/kubelet/pods/7beff252-7486-4796-a37b-1a07e600909c/volumes" Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.517936 4680 generic.go:334] "Generic (PLEG): container finished" podID="eeadf039-2106-407a-8316-f313f723e3d2" containerID="c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a" exitCode=0 Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.518044 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerDied","Data":"c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a"} Jan 26 18:10:41 crc kubenswrapper[4680]: I0126 18:10:41.518102 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerStarted","Data":"f0523d3be89223b3d647654dc49e17c8d4633e5db1663219752c8403cdd13c88"} Jan 26 18:10:42 crc kubenswrapper[4680]: I0126 18:10:42.534549 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerStarted","Data":"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf"} Jan 26 18:10:44 crc kubenswrapper[4680]: I0126 18:10:44.469629 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:44 crc kubenswrapper[4680]: I0126 18:10:44.470661 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5hkk8" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" containerID="cri-o://b757fcc7cccb4e542ac62bc2fca4bd68688ee418cc2b88fec40c2ba851e41724" gracePeriod=2 Jan 26 18:10:44 crc kubenswrapper[4680]: I0126 18:10:44.553088 4680 generic.go:334] "Generic (PLEG): container finished" podID="eeadf039-2106-407a-8316-f313f723e3d2" containerID="677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf" exitCode=0 Jan 26 18:10:44 crc kubenswrapper[4680]: I0126 18:10:44.553132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" 
event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerDied","Data":"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf"} Jan 26 18:10:45 crc kubenswrapper[4680]: I0126 18:10:45.570356 4680 generic.go:334] "Generic (PLEG): container finished" podID="af486d30-e630-45d1-bc94-fad641f26616" containerID="b757fcc7cccb4e542ac62bc2fca4bd68688ee418cc2b88fec40c2ba851e41724" exitCode=0 Jan 26 18:10:45 crc kubenswrapper[4680]: I0126 18:10:45.570573 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerDied","Data":"b757fcc7cccb4e542ac62bc2fca4bd68688ee418cc2b88fec40c2ba851e41724"} Jan 26 18:10:45 crc kubenswrapper[4680]: I0126 18:10:45.847380 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.012820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvzxd\" (UniqueName: \"kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd\") pod \"af486d30-e630-45d1-bc94-fad641f26616\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.013026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities\") pod \"af486d30-e630-45d1-bc94-fad641f26616\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.013214 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content\") pod \"af486d30-e630-45d1-bc94-fad641f26616\" (UID: \"af486d30-e630-45d1-bc94-fad641f26616\") " Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.014541 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities" (OuterVolumeSpecName: "utilities") pod "af486d30-e630-45d1-bc94-fad641f26616" (UID: "af486d30-e630-45d1-bc94-fad641f26616"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.076653 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd" (OuterVolumeSpecName: "kube-api-access-hvzxd") pod "af486d30-e630-45d1-bc94-fad641f26616" (UID: "af486d30-e630-45d1-bc94-fad641f26616"). InnerVolumeSpecName "kube-api-access-hvzxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.115769 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvzxd\" (UniqueName: \"kubernetes.io/projected/af486d30-e630-45d1-bc94-fad641f26616-kube-api-access-hvzxd\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.115811 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.162719 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af486d30-e630-45d1-bc94-fad641f26616" (UID: "af486d30-e630-45d1-bc94-fad641f26616"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.218337 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af486d30-e630-45d1-bc94-fad641f26616-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.582627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hkk8" event={"ID":"af486d30-e630-45d1-bc94-fad641f26616","Type":"ContainerDied","Data":"4852e47d12a64315c34a72184537ba959a436de708755f41a7de6ab054e86368"} Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.582681 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hkk8" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.582688 4680 scope.go:117] "RemoveContainer" containerID="b757fcc7cccb4e542ac62bc2fca4bd68688ee418cc2b88fec40c2ba851e41724" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.586232 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerStarted","Data":"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe"} Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.613787 4680 scope.go:117] "RemoveContainer" containerID="9b1fc9b6eb823a42631f9efa4adfe74c2949ba2889f48e73b4725672c8c32a23" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.614660 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xlm5h" podStartSLOduration=3.656565124 podStartE2EDuration="7.613276925s" podCreationTimestamp="2026-01-26 18:10:39 +0000 UTC" firstStartedPulling="2026-01-26 18:10:41.519831223 +0000 UTC m=+7516.681103492" lastFinishedPulling="2026-01-26 18:10:45.476543024 +0000 UTC m=+7520.637815293" observedRunningTime="2026-01-26 18:10:46.612605716 +0000 UTC m=+7521.773877985" watchObservedRunningTime="2026-01-26 18:10:46.613276925 +0000 UTC m=+7521.774549204" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.645362 4680 scope.go:117] "RemoveContainer" containerID="2b368a156aca7fb69d8df4b1514897f81c16571c7735b9cb4733900bdff3c912" Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.665232 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:46 crc kubenswrapper[4680]: I0126 18:10:46.686627 
4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5hkk8"] Jan 26 18:10:47 crc kubenswrapper[4680]: I0126 18:10:47.197271 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af486d30-e630-45d1-bc94-fad641f26616" path="/var/lib/kubelet/pods/af486d30-e630-45d1-bc94-fad641f26616/volumes" Jan 26 18:10:50 crc kubenswrapper[4680]: I0126 18:10:50.026345 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:50 crc kubenswrapper[4680]: I0126 18:10:50.026665 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:50 crc kubenswrapper[4680]: I0126 18:10:50.077387 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:50 crc kubenswrapper[4680]: I0126 18:10:50.664380 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:51 crc kubenswrapper[4680]: I0126 18:10:51.862345 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"] Jan 26 18:10:52 crc kubenswrapper[4680]: I0126 18:10:52.169643 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:10:52 crc kubenswrapper[4680]: E0126 18:10:52.170312 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:10:52 crc kubenswrapper[4680]: I0126 18:10:52.633781 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xlm5h" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="registry-server" containerID="cri-o://ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe" gracePeriod=2 Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.179857 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.266032 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content\") pod \"eeadf039-2106-407a-8316-f313f723e3d2\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.266191 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities\") pod \"eeadf039-2106-407a-8316-f313f723e3d2\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.266622 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6fs\" (UniqueName: \"kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs\") pod \"eeadf039-2106-407a-8316-f313f723e3d2\" (UID: \"eeadf039-2106-407a-8316-f313f723e3d2\") " Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.267015 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities" (OuterVolumeSpecName: "utilities") pod "eeadf039-2106-407a-8316-f313f723e3d2" (UID: "eeadf039-2106-407a-8316-f313f723e3d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.267380 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.272031 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs" (OuterVolumeSpecName: "kube-api-access-lj6fs") pod "eeadf039-2106-407a-8316-f313f723e3d2" (UID: "eeadf039-2106-407a-8316-f313f723e3d2"). InnerVolumeSpecName "kube-api-access-lj6fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.321110 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeadf039-2106-407a-8316-f313f723e3d2" (UID: "eeadf039-2106-407a-8316-f313f723e3d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.368772 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeadf039-2106-407a-8316-f313f723e3d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.369003 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6fs\" (UniqueName: \"kubernetes.io/projected/eeadf039-2106-407a-8316-f313f723e3d2-kube-api-access-lj6fs\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.643093 4680 generic.go:334] "Generic (PLEG): container finished" podID="eeadf039-2106-407a-8316-f313f723e3d2" containerID="ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe" exitCode=0 Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.643146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerDied","Data":"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe"} Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.643157 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlm5h" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.643173 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlm5h" event={"ID":"eeadf039-2106-407a-8316-f313f723e3d2","Type":"ContainerDied","Data":"f0523d3be89223b3d647654dc49e17c8d4633e5db1663219752c8403cdd13c88"} Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.643189 4680 scope.go:117] "RemoveContainer" containerID="ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.663547 4680 scope.go:117] "RemoveContainer" containerID="677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.699648 4680 scope.go:117] "RemoveContainer" containerID="c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.714147 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"] Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.777879 4680 scope.go:117] "RemoveContainer" containerID="ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.778906 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xlm5h"] Jan 26 18:10:53 crc kubenswrapper[4680]: E0126 18:10:53.790245 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe\": container with ID starting with ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe not found: ID does not exist" containerID="ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.790458 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe"} err="failed to get container status 
\"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe\": rpc error: code = NotFound desc = could not find container \"ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe\": container with ID starting with ca800258daf150465b03b97105521eead2e0cb6bd3dd50feea885b1894cafafe not found: ID does not exist" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.790582 4680 scope.go:117] "RemoveContainer" containerID="677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf" Jan 26 18:10:53 crc kubenswrapper[4680]: E0126 18:10:53.811240 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf\": container with ID starting with 677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf not found: ID does not exist" containerID="677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.811295 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf"} err="failed to get container status \"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf\": rpc error: code = NotFound desc = could not find container \"677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf\": container with ID starting with 677a103cb89d0ca7c9352dc16feb80986f633d864b248f5d8f381d355c8037cf not found: ID does not exist" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.811330 4680 scope.go:117] "RemoveContainer" containerID="c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a" Jan 26 18:10:53 crc kubenswrapper[4680]: E0126 18:10:53.830360 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a\": container with ID starting with c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a not found: ID does not exist" containerID="c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a" Jan 26 18:10:53 crc kubenswrapper[4680]: I0126 18:10:53.830405 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a"} err="failed to get container status \"c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a\": rpc error: code = NotFound desc = could not find container \"c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a\": container with ID starting with c0fd9f9939a7bb0e597840587e305c9fb64efef5c0886b25c318f244179ee58a not found: ID does not exist" Jan 26 18:10:55 crc kubenswrapper[4680]: I0126 18:10:55.182510 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeadf039-2106-407a-8316-f313f723e3d2" path="/var/lib/kubelet/pods/eeadf039-2106-407a-8316-f313f723e3d2/volumes" Jan 26 18:11:07 crc kubenswrapper[4680]: I0126 18:11:07.170007 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:11:07 crc kubenswrapper[4680]: E0126 18:11:07.170833 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 26 18:11:21 crc kubenswrapper[4680]: I0126 18:11:21.170022 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:11:21 crc kubenswrapper[4680]: E0126 18:11:21.170897 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:11:34 crc kubenswrapper[4680]: I0126 18:11:34.169731 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:11:34 crc kubenswrapper[4680]: E0126 18:11:34.170645 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:11:45 crc kubenswrapper[4680]: I0126 18:11:45.178723 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:11:45 crc kubenswrapper[4680]: E0126 18:11:45.180873 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:11:59 crc kubenswrapper[4680]: I0126 18:11:59.170409 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72"
Jan 26 18:11:59 crc kubenswrapper[4680]: E0126 18:11:59.171720 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7"
Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.706686 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"]
Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710322 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="registry-server"
Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710358 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="registry-server"
Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710376 4680 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="extract-content" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710383 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="extract-content" Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710402 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="extract-content" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710408 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="extract-content" Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710420 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="extract-utilities" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710425 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="extract-utilities" Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710437 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710442 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" Jan 26 18:12:03 crc kubenswrapper[4680]: E0126 18:12:03.710452 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="extract-utilities" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.710458 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="extract-utilities" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.711038 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeadf039-2106-407a-8316-f313f723e3d2" containerName="registry-server" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.711085 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="af486d30-e630-45d1-bc94-fad641f26616" containerName="registry-server" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.715574 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.729596 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"] Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.822476 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.823600 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssrs\" (UniqueName: \"kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.824181 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.925662 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.926020 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lssrs\" (UniqueName: \"kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.926368 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.926867 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.927269 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:03 crc kubenswrapper[4680]: I0126 18:12:03.949703 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lssrs\" (UniqueName: \"kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs\") pod \"redhat-marketplace-qkhft\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:04 crc kubenswrapper[4680]: I0126 18:12:04.096400 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:04 crc kubenswrapper[4680]: I0126 18:12:04.851113 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"] Jan 26 18:12:05 crc kubenswrapper[4680]: I0126 18:12:05.282770 4680 generic.go:334] "Generic (PLEG): container finished" podID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerID="f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b" exitCode=0 Jan 26 18:12:05 crc kubenswrapper[4680]: I0126 18:12:05.283093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerDied","Data":"f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b"} Jan 26 18:12:05 crc kubenswrapper[4680]: I0126 18:12:05.283126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerStarted","Data":"77249753997b1b1eb0cedc7412b93594ad88254e89d2d9396fab537d93a63eaf"} Jan 26 18:12:06 crc kubenswrapper[4680]: I0126 18:12:06.293755 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerStarted","Data":"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9"} Jan 26 18:12:06 crc kubenswrapper[4680]: E0126 18:12:06.983057 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d768ad_52c9_49aa_8a03_26637a9ba32d.slice/crio-conmon-6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d768ad_52c9_49aa_8a03_26637a9ba32d.slice/crio-6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:12:07 crc kubenswrapper[4680]: I0126 18:12:07.317228 4680 generic.go:334] "Generic (PLEG): container finished" podID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerID="6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9" exitCode=0 Jan 26 18:12:07 crc kubenswrapper[4680]: I0126 18:12:07.317592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerDied","Data":"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9"} Jan 26 18:12:08 crc kubenswrapper[4680]: I0126 18:12:08.330569 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerStarted","Data":"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a"} Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.096724 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.097492 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.145217 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.164728 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkhft" podStartSLOduration=8.727641969 podStartE2EDuration="11.164713009s" podCreationTimestamp="2026-01-26 18:12:03 +0000 UTC" firstStartedPulling="2026-01-26 18:12:05.284634972 +0000 UTC m=+7600.445907241" lastFinishedPulling="2026-01-26 18:12:07.721706012 +0000 UTC m=+7602.882978281" observedRunningTime="2026-01-26 18:12:08.35768651 +0000 UTC m=+7603.518958779" watchObservedRunningTime="2026-01-26 18:12:14.164713009 +0000 UTC m=+7609.325985278" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.169559 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:12:14 crc kubenswrapper[4680]: E0126 18:12:14.170043 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.430791 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:14 crc kubenswrapper[4680]: I0126 18:12:14.477496 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"] Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.404653 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkhft" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="registry-server" containerID="cri-o://ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a" gracePeriod=2 Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.902691 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.971663 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lssrs\" (UniqueName: \"kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs\") pod \"95d768ad-52c9-49aa-8a03-26637a9ba32d\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.971705 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content\") pod \"95d768ad-52c9-49aa-8a03-26637a9ba32d\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.971974 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities\") pod \"95d768ad-52c9-49aa-8a03-26637a9ba32d\" (UID: \"95d768ad-52c9-49aa-8a03-26637a9ba32d\") " Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.973319 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities" (OuterVolumeSpecName: "utilities") pod "95d768ad-52c9-49aa-8a03-26637a9ba32d" (UID: "95d768ad-52c9-49aa-8a03-26637a9ba32d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:12:16 crc kubenswrapper[4680]: I0126 18:12:16.981516 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs" (OuterVolumeSpecName: "kube-api-access-lssrs") pod "95d768ad-52c9-49aa-8a03-26637a9ba32d" (UID: "95d768ad-52c9-49aa-8a03-26637a9ba32d"). InnerVolumeSpecName "kube-api-access-lssrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.003263 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95d768ad-52c9-49aa-8a03-26637a9ba32d" (UID: "95d768ad-52c9-49aa-8a03-26637a9ba32d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.075770 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.075816 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lssrs\" (UniqueName: \"kubernetes.io/projected/95d768ad-52c9-49aa-8a03-26637a9ba32d-kube-api-access-lssrs\") on node \"crc\" DevicePath \"\"" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.075832 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d768ad-52c9-49aa-8a03-26637a9ba32d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.419678 4680 generic.go:334] "Generic (PLEG): container finished" podID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerID="ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a" exitCode=0 Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.419737 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerDied","Data":"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a"} Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.419770 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkhft" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.420177 4680 scope.go:117] "RemoveContainer" containerID="ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.420162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkhft" event={"ID":"95d768ad-52c9-49aa-8a03-26637a9ba32d","Type":"ContainerDied","Data":"77249753997b1b1eb0cedc7412b93594ad88254e89d2d9396fab537d93a63eaf"} Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.454033 4680 scope.go:117] "RemoveContainer" containerID="6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.458624 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"] Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.468582 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkhft"] Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.480015 4680 scope.go:117] "RemoveContainer" containerID="f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.532358 4680 scope.go:117] "RemoveContainer" containerID="ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a" Jan 26 18:12:17 crc kubenswrapper[4680]: E0126 18:12:17.532971 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a\": container with ID starting with ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a not found: ID does not exist" containerID="ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.533006 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a"} err="failed to get container status \"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a\": rpc error: code = NotFound desc = could not find container \"ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a\": container with ID starting with ae6f5da63c0d777b26ab961dc7befaa3284db7305808bbe44e63ce849a2ce36a not found: ID does not exist" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.533030 4680 scope.go:117] "RemoveContainer" containerID="6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9" Jan 26 18:12:17 crc kubenswrapper[4680]: E0126 18:12:17.533421 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9\": container with ID starting with 6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9 not found: ID does not exist" containerID="6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.533454 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9"} err="failed to get container status \"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9\": rpc error: code = NotFound desc = could not find container \"6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9\": container with ID starting with 6cf7a5f09357ce8242e65a07a786a0090c6603563aecd184af7afa47fdcff2a9 not found: ID does not exist" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.533473 4680 scope.go:117] "RemoveContainer" containerID="f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b" Jan 26 18:12:17 crc kubenswrapper[4680]: E0126 18:12:17.533883 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b\": container with ID starting with f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b not found: ID does not exist" containerID="f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b" Jan 26 18:12:17 crc kubenswrapper[4680]: I0126 18:12:17.533907 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b"} err="failed to get container status \"f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b\": rpc error: code = NotFound desc = could not find container \"f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b\": container with ID starting with f9d63a54f14e656efdabb66a9d890fd24836b8d05ae8805ba2b7a76cb7ad470b not found: ID does not exist" Jan 26 18:12:19 crc kubenswrapper[4680]: I0126 18:12:19.179327 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" path="/var/lib/kubelet/pods/95d768ad-52c9-49aa-8a03-26637a9ba32d/volumes" Jan 26 18:12:27 crc kubenswrapper[4680]: I0126 18:12:27.169440 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:12:27 crc kubenswrapper[4680]: E0126 18:12:27.170268 4680 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:12:40 crc kubenswrapper[4680]: I0126 18:12:40.170564 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:12:40 crc kubenswrapper[4680]: E0126 18:12:40.171358 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:12:53 crc kubenswrapper[4680]: I0126 18:12:53.169503 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:12:53 crc kubenswrapper[4680]: E0126 18:12:53.170456 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:13:08 crc kubenswrapper[4680]: I0126 18:13:08.170065 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:13:08 crc kubenswrapper[4680]: E0126 18:13:08.170723 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:13:21 crc kubenswrapper[4680]: I0126 18:13:21.169632 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:13:21 crc kubenswrapper[4680]: E0126 18:13:21.170405 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:13:35 crc kubenswrapper[4680]: I0126 18:13:35.176567 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:13:35 crc kubenswrapper[4680]: E0126 18:13:35.177223 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:13:47 crc kubenswrapper[4680]: I0126 18:13:47.169463 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:13:47 crc kubenswrapper[4680]: E0126 18:13:47.170420 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:13:58 crc kubenswrapper[4680]: I0126 18:13:58.169667 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:13:58 crc kubenswrapper[4680]: E0126 18:13:58.170403 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:14:10 crc kubenswrapper[4680]: I0126 18:14:10.169510 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:14:10 crc kubenswrapper[4680]: E0126 18:14:10.170402 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:14:24 crc kubenswrapper[4680]: I0126 18:14:24.170597 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:14:25 crc kubenswrapper[4680]: I0126 18:14:25.050551 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4"} Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.200290 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k"] Jan 26 18:15:00 crc kubenswrapper[4680]: E0126 18:15:00.202776 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.203945 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4680]: E0126 18:15:00.204010 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="registry-server" Jan 
26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.204091 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4680]: E0126 18:15:00.204190 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.204251 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.204494 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d768ad-52c9-49aa-8a03-26637a9ba32d" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.205883 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.216035 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.216258 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.219155 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k"] Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.282654 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.282978 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.283528 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nssgz\" (UniqueName: \"kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.385612 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nssgz\" (UniqueName: \"kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.385824 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.385865 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.386786 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.403277 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.406869 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nssgz\" (UniqueName: \"kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz\") pod \"collect-profiles-29490855-p6d7k\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:00 crc kubenswrapper[4680]: I0126 18:15:00.529081 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:01 crc kubenswrapper[4680]: I0126 18:15:01.034749 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k"] Jan 26 18:15:01 crc kubenswrapper[4680]: W0126 18:15:01.034872 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9efba4ed_d64f_4c8d_89c0_b8d217c3f717.slice/crio-0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd WatchSource:0}: Error finding container 0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd: Status 404 returned error can't find the container with id 0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd Jan 26 18:15:01 crc kubenswrapper[4680]: I0126 18:15:01.639630 4680 generic.go:334] "Generic (PLEG): container finished" podID="9efba4ed-d64f-4c8d-89c0-b8d217c3f717" containerID="9b89eb1436bbded21e46bb5e8cbfd4a859a0b64726883562b58b13930310d87f" exitCode=0 Jan 26 18:15:01 crc kubenswrapper[4680]: I0126 18:15:01.639805 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" event={"ID":"9efba4ed-d64f-4c8d-89c0-b8d217c3f717","Type":"ContainerDied","Data":"9b89eb1436bbded21e46bb5e8cbfd4a859a0b64726883562b58b13930310d87f"} Jan 26 18:15:01 crc kubenswrapper[4680]: I0126 18:15:01.640244 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" event={"ID":"9efba4ed-d64f-4c8d-89c0-b8d217c3f717","Type":"ContainerStarted","Data":"0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd"} Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.046281 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.149785 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume\") pod \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.149922 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nssgz\" (UniqueName: \"kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz\") pod \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.150045 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume\") pod \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\" (UID: \"9efba4ed-d64f-4c8d-89c0-b8d217c3f717\") " Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.150655 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume" (OuterVolumeSpecName: "config-volume") pod "9efba4ed-d64f-4c8d-89c0-b8d217c3f717" (UID: "9efba4ed-d64f-4c8d-89c0-b8d217c3f717"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.155892 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz" (OuterVolumeSpecName: "kube-api-access-nssgz") pod "9efba4ed-d64f-4c8d-89c0-b8d217c3f717" (UID: "9efba4ed-d64f-4c8d-89c0-b8d217c3f717"). InnerVolumeSpecName "kube-api-access-nssgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.157239 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9efba4ed-d64f-4c8d-89c0-b8d217c3f717" (UID: "9efba4ed-d64f-4c8d-89c0-b8d217c3f717"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.253048 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.253122 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.253136 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nssgz\" (UniqueName: \"kubernetes.io/projected/9efba4ed-d64f-4c8d-89c0-b8d217c3f717-kube-api-access-nssgz\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.656146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" event={"ID":"9efba4ed-d64f-4c8d-89c0-b8d217c3f717","Type":"ContainerDied","Data":"0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd"} Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.656193 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0540a4a418d9c1082a252113803847a871166983214d7fbc95449e0e6706c3bd" Jan 26 18:15:03 crc kubenswrapper[4680]: I0126 18:15:03.656434 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-p6d7k" Jan 26 18:15:04 crc kubenswrapper[4680]: I0126 18:15:04.170515 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr"] Jan 26 18:15:04 crc kubenswrapper[4680]: I0126 18:15:04.178674 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-ngbpr"] Jan 26 18:15:05 crc kubenswrapper[4680]: I0126 18:15:05.180477 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ca6cc9-f7a4-47b0-953f-508d6665fff6" path="/var/lib/kubelet/pods/e0ca6cc9-f7a4-47b0-953f-508d6665fff6/volumes" Jan 26 18:15:48 crc kubenswrapper[4680]: I0126 18:15:48.479714 4680 scope.go:117] "RemoveContainer" containerID="868f589dee77fcc109fc87e5d8a1806debb0702f42d795d1e45f37cfc4647fd1" Jan 26 18:16:46 crc kubenswrapper[4680]: I0126 18:16:46.980597 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:16:46 crc kubenswrapper[4680]: I0126 18:16:46.981185 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:17:16 crc kubenswrapper[4680]: I0126 18:17:16.981211 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:17:16 crc kubenswrapper[4680]: I0126 18:17:16.981750 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:17:46 crc kubenswrapper[4680]: I0126 18:17:46.981330 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:17:46 crc kubenswrapper[4680]: I0126 18:17:46.981872 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:17:46 crc kubenswrapper[4680]: I0126 18:17:46.981915 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 18:17:46 crc kubenswrapper[4680]: I0126 18:17:46.982844 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:17:46 crc kubenswrapper[4680]: I0126 18:17:46.982908 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4" gracePeriod=600 Jan 26 18:17:48 crc kubenswrapper[4680]: I0126 18:17:48.036122 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4" exitCode=0 Jan 26 18:17:48 crc kubenswrapper[4680]: I0126 18:17:48.036191 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4"} Jan 26 18:17:48 crc kubenswrapper[4680]: I0126 18:17:48.036627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4"} Jan 26 18:17:48 crc kubenswrapper[4680]: I0126 18:17:48.036649 4680 scope.go:117] "RemoveContainer" containerID="0389adb0e688ab296de79b4de44665d75d14da63ce7337beb452da731b00cf72" Jan 26 18:20:16 crc kubenswrapper[4680]: I0126 18:20:16.980566 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:20:16 crc kubenswrapper[4680]: I0126 18:20:16.981128 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.096226 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:20 crc kubenswrapper[4680]: E0126 18:20:20.097142 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efba4ed-d64f-4c8d-89c0-b8d217c3f717" containerName="collect-profiles" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.097155 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efba4ed-d64f-4c8d-89c0-b8d217c3f717" containerName="collect-profiles" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.101410 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efba4ed-d64f-4c8d-89c0-b8d217c3f717" containerName="collect-profiles" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.103150 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.130402 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.140734 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.140831 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.141001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96fq\" (UniqueName: \"kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.243241 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.243499 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96fq\" (UniqueName: \"kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.243700 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.244264 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.245678 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.277810 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x96fq\" (UniqueName: \"kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq\") pod \"redhat-operators-j9mvp\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:20 crc kubenswrapper[4680]: I0126 18:20:20.426643 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:21 crc kubenswrapper[4680]: I0126 18:20:21.347191 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:22 crc kubenswrapper[4680]: I0126 18:20:22.388104 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerDied","Data":"1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24"} Jan 26 18:20:22 crc kubenswrapper[4680]: I0126 18:20:22.388150 4680 generic.go:334] "Generic (PLEG): container finished" podID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerID="1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24" exitCode=0 Jan 26 18:20:22 crc kubenswrapper[4680]: I0126 18:20:22.388837 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerStarted","Data":"a541941b865dd7a46177f0976f9b447636820b557af597e44a7eb634598a9bc9"} Jan 26 18:20:22 crc kubenswrapper[4680]: I0126 18:20:22.391102 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:20:23 crc kubenswrapper[4680]: I0126 18:20:23.398772 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerStarted","Data":"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa"} Jan 26 18:20:24 crc kubenswrapper[4680]: E0126 18:20:24.760333 4680 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.20:40892->38.102.83.20:45165: read tcp 38.102.83.20:40892->38.102.83.20:45165: read: connection reset by peer Jan 26 18:20:27 crc kubenswrapper[4680]: I0126 18:20:27.435143 4680 generic.go:334] "Generic (PLEG): container finished" podID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerID="6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa" exitCode=0 Jan 26 18:20:27 crc kubenswrapper[4680]: I0126 18:20:27.435232 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerDied","Data":"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa"} Jan 26 18:20:28 crc kubenswrapper[4680]: I0126 18:20:28.447635 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerStarted","Data":"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b"} Jan 26 18:20:28 crc kubenswrapper[4680]: I0126 18:20:28.474349 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9mvp" podStartSLOduration=2.866739305 podStartE2EDuration="8.473964093s" podCreationTimestamp="2026-01-26 18:20:20 +0000 UTC" firstStartedPulling="2026-01-26 18:20:22.389672864 +0000 UTC m=+8097.550945133" 
lastFinishedPulling="2026-01-26 18:20:27.996897652 +0000 UTC m=+8103.158169921" observedRunningTime="2026-01-26 18:20:28.464538657 +0000 UTC m=+8103.625810936" watchObservedRunningTime="2026-01-26 18:20:28.473964093 +0000 UTC m=+8103.635236362" Jan 26 18:20:30 crc kubenswrapper[4680]: I0126 18:20:30.427386 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:30 crc kubenswrapper[4680]: I0126 18:20:30.427673 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:31 crc kubenswrapper[4680]: I0126 18:20:31.497437 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9mvp" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" probeResult="failure" output=< Jan 26 18:20:31 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:20:31 crc kubenswrapper[4680]: > Jan 26 18:20:34 crc kubenswrapper[4680]: I0126 18:20:34.971588 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:34 crc kubenswrapper[4680]: I0126 18:20:34.974306 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:34 crc kubenswrapper[4680]: I0126 18:20:34.986109 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.072793 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzc5\" (UniqueName: \"kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.072842 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.072975 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.174745 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.174829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzc5\" (UniqueName: \"kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5\") pod \"community-operators-x5529\" (UID: 
\"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.174853 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.175414 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.179063 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.210231 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzc5\" (UniqueName: \"kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5\") pod \"community-operators-x5529\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.355411 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:35 crc kubenswrapper[4680]: I0126 18:20:35.931867 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:35 crc kubenswrapper[4680]: W0126 18:20:35.932665 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0adbfd_b62c_4921_b6e8_3bb17af01fbd.slice/crio-7fb324d4163fed6b7344a376b401d122bc0cad176f2c3b3ec60a6b06fe94b662 WatchSource:0}: Error finding container 7fb324d4163fed6b7344a376b401d122bc0cad176f2c3b3ec60a6b06fe94b662: Status 404 returned error can't find the container with id 7fb324d4163fed6b7344a376b401d122bc0cad176f2c3b3ec60a6b06fe94b662 Jan 26 18:20:36 crc kubenswrapper[4680]: I0126 18:20:36.520377 4680 generic.go:334] "Generic (PLEG): container finished" podID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerID="e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397" exitCode=0 Jan 26 18:20:36 crc kubenswrapper[4680]: I0126 18:20:36.520680 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerDied","Data":"e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397"} Jan 26 18:20:36 crc kubenswrapper[4680]: I0126 18:20:36.520715 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerStarted","Data":"7fb324d4163fed6b7344a376b401d122bc0cad176f2c3b3ec60a6b06fe94b662"} Jan 26 18:20:38 crc kubenswrapper[4680]: I0126 18:20:38.543326 4680 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerStarted","Data":"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7"} Jan 26 18:20:39 crc kubenswrapper[4680]: I0126 18:20:39.556821 4680 generic.go:334] "Generic (PLEG): container finished" podID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerID="ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7" exitCode=0 Jan 26 18:20:39 crc kubenswrapper[4680]: I0126 18:20:39.556876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerDied","Data":"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7"} Jan 26 18:20:40 crc kubenswrapper[4680]: I0126 18:20:40.566498 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerStarted","Data":"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc"} Jan 26 18:20:40 crc kubenswrapper[4680]: I0126 18:20:40.598192 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x5529" podStartSLOduration=3.085075178 podStartE2EDuration="6.59817164s" podCreationTimestamp="2026-01-26 18:20:34 +0000 UTC" firstStartedPulling="2026-01-26 18:20:36.523531798 +0000 UTC m=+8111.684804057" lastFinishedPulling="2026-01-26 18:20:40.03662825 +0000 UTC m=+8115.197900519" observedRunningTime="2026-01-26 18:20:40.588894748 +0000 UTC m=+8115.750167017" watchObservedRunningTime="2026-01-26 18:20:40.59817164 +0000 UTC m=+8115.759443909" Jan 26 18:20:41 crc kubenswrapper[4680]: I0126 18:20:41.489470 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9mvp" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" probeResult="failure" output=< Jan 26 18:20:41 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:20:41 crc kubenswrapper[4680]: > Jan 26 18:20:45 crc kubenswrapper[4680]: I0126 18:20:45.356719 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:45 crc kubenswrapper[4680]: I0126 18:20:45.357444 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:46 crc kubenswrapper[4680]: I0126 18:20:46.410270 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x5529" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="registry-server" probeResult="failure" output=< Jan 26 18:20:46 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:20:46 crc kubenswrapper[4680]: > Jan 26 18:20:46 crc kubenswrapper[4680]: I0126 18:20:46.980992 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:20:46 crc kubenswrapper[4680]: I0126 18:20:46.981579 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:20:50 crc kubenswrapper[4680]: I0126 18:20:50.499573 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:50 crc kubenswrapper[4680]: I0126 18:20:50.559881 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:51 crc kubenswrapper[4680]: I0126 18:20:51.300650 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:51 crc kubenswrapper[4680]: I0126 18:20:51.679294 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9mvp" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" containerID="cri-o://4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b" gracePeriod=2 Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.674750 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.689600 4680 generic.go:334] "Generic (PLEG): container finished" podID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerID="4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b" exitCode=0 Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.690753 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerDied","Data":"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b"} Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.690876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9mvp" event={"ID":"e9452808-921b-4d19-bb59-c6fe28cea62e","Type":"ContainerDied","Data":"a541941b865dd7a46177f0976f9b447636820b557af597e44a7eb634598a9bc9"} Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.691122 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9mvp" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.691456 4680 scope.go:117] "RemoveContainer" containerID="4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.729990 4680 scope.go:117] "RemoveContainer" containerID="6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.759327 4680 scope.go:117] "RemoveContainer" containerID="1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.767926 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities\") pod \"e9452808-921b-4d19-bb59-c6fe28cea62e\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.768050 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x96fq\" (UniqueName: \"kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq\") pod \"e9452808-921b-4d19-bb59-c6fe28cea62e\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.768149 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content\") pod \"e9452808-921b-4d19-bb59-c6fe28cea62e\" (UID: \"e9452808-921b-4d19-bb59-c6fe28cea62e\") " Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.770665 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities" (OuterVolumeSpecName: "utilities") pod "e9452808-921b-4d19-bb59-c6fe28cea62e" (UID: "e9452808-921b-4d19-bb59-c6fe28cea62e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.783024 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq" (OuterVolumeSpecName: "kube-api-access-x96fq") pod "e9452808-921b-4d19-bb59-c6fe28cea62e" (UID: "e9452808-921b-4d19-bb59-c6fe28cea62e"). InnerVolumeSpecName "kube-api-access-x96fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.809367 4680 scope.go:117] "RemoveContainer" containerID="4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b" Jan 26 18:20:52 crc kubenswrapper[4680]: E0126 18:20:52.811434 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b\": container with ID starting with 4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b not found: ID does not exist" containerID="4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.811839 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b"} err="failed to get container status \"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b\": rpc error: code = NotFound desc = could not find container \"4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b\": container with ID starting with 4515946f0c005da39a94f4401c08cd945c0cf89411c5cd7f0a1b816deea0db5b not found: ID does not exist" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.811968 4680 scope.go:117] "RemoveContainer" containerID="6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa" Jan 26 18:20:52 crc kubenswrapper[4680]: E0126 18:20:52.813206 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa\": container with ID starting with 6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa not found: ID does not exist" containerID="6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.813272 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa"} err="failed to get container status \"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa\": rpc error: code = NotFound desc = could not find container \"6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa\": container with ID starting with 6c79d52f377c032e48f00088bdbf46341a781390557e83e9173cea2095cbaffa not found: ID does not exist" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.813306 4680 scope.go:117] "RemoveContainer" containerID="1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24" Jan 26 18:20:52 crc kubenswrapper[4680]: E0126 18:20:52.813734 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24\": container with ID starting with 1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24 not found: ID does not exist" containerID="1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.813906 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24"} err="failed to get container status \"1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24\": rpc error: code = NotFound desc = could not 
find container \"1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24\": container with ID starting with 1fd3aaae357371c2192012153e3a824ace02629936429f0d74e8ad6ae436ca24 not found: ID does not exist" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.871251 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x96fq\" (UniqueName: \"kubernetes.io/projected/e9452808-921b-4d19-bb59-c6fe28cea62e-kube-api-access-x96fq\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.871658 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.944203 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9452808-921b-4d19-bb59-c6fe28cea62e" (UID: "e9452808-921b-4d19-bb59-c6fe28cea62e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:52 crc kubenswrapper[4680]: I0126 18:20:52.973372 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9452808-921b-4d19-bb59-c6fe28cea62e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:53 crc kubenswrapper[4680]: I0126 18:20:53.068133 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:53 crc kubenswrapper[4680]: I0126 18:20:53.083880 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9mvp"] Jan 26 18:20:53 crc kubenswrapper[4680]: I0126 18:20:53.185663 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" path="/var/lib/kubelet/pods/e9452808-921b-4d19-bb59-c6fe28cea62e/volumes" Jan 26 18:20:55 crc kubenswrapper[4680]: I0126 18:20:55.401286 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:55 crc kubenswrapper[4680]: I0126 18:20:55.452275 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:55 crc kubenswrapper[4680]: I0126 18:20:55.702441 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:56 crc kubenswrapper[4680]: I0126 18:20:56.742002 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x5529" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="registry-server" containerID="cri-o://1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc" gracePeriod=2 Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.494449 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.672144 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content\") pod \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.672245 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glzc5\" (UniqueName: \"kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5\") pod \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.672276 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities\") pod \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\" (UID: \"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd\") " Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.672877 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities" (OuterVolumeSpecName: "utilities") pod "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" (UID: "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.687297 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5" (OuterVolumeSpecName: "kube-api-access-glzc5") pod "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" (UID: "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd"). InnerVolumeSpecName "kube-api-access-glzc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.739716 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" (UID: "8e0adbfd-b62c-4921-b6e8-3bb17af01fbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.752667 4680 generic.go:334] "Generic (PLEG): container finished" podID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerID="1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc" exitCode=0 Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.752737 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5529" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.752705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerDied","Data":"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc"} Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.753008 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5529" event={"ID":"8e0adbfd-b62c-4921-b6e8-3bb17af01fbd","Type":"ContainerDied","Data":"7fb324d4163fed6b7344a376b401d122bc0cad176f2c3b3ec60a6b06fe94b662"} Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.753038 4680 scope.go:117] "RemoveContainer" containerID="1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.775855 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.775970 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glzc5\" (UniqueName: \"kubernetes.io/projected/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-kube-api-access-glzc5\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.775988 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.814177 4680 scope.go:117] "RemoveContainer" containerID="ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.827192 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.836958 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x5529"] Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.842540 4680 scope.go:117] "RemoveContainer" containerID="e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.887755 4680 scope.go:117] "RemoveContainer" containerID="1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc" Jan 26 18:20:57 crc kubenswrapper[4680]: E0126 18:20:57.888245 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc\": container with ID starting with 1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc not found: ID does not exist" containerID="1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.888284 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc"} err="failed to get container status \"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc\": rpc error: code = NotFound desc = could not find container \"1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc\": container with ID starting 
with 1441520d2a8c0cc07325f1f977bc0e53e0e7b469af556d8b2499d3dd24f123fc not found: ID does not exist" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.888310 4680 scope.go:117] "RemoveContainer" containerID="ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7" Jan 26 18:20:57 crc kubenswrapper[4680]: E0126 18:20:57.888608 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7\": container with ID starting with ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7 not found: ID does not exist" containerID="ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.888633 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7"} err="failed to get container status \"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7\": rpc error: code = NotFound desc = could not find container \"ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7\": container with ID starting with ec2760e653a9d490a6e3475c8b661af039876cd30784ab9a59b3a2bcb7510ef7 not found: ID does not exist" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.888645 4680 scope.go:117] "RemoveContainer" containerID="e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397" Jan 26 18:20:57 crc kubenswrapper[4680]: E0126 18:20:57.888963 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397\": container with ID starting with e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397 not found: ID does not exist" containerID="e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397" Jan 26 18:20:57 crc kubenswrapper[4680]: I0126 18:20:57.888985 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397"} err="failed to get container status \"e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397\": rpc error: code = NotFound desc = could not find container \"e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397\": container with ID starting with e29f6f1b06d952557edfdbb9f0f759e1147c2396f94fb81e791ac6b180c4b397 not found: ID does not exist" Jan 26 18:20:59 crc kubenswrapper[4680]: I0126 18:20:59.182865 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" path="/var/lib/kubelet/pods/8e0adbfd-b62c-4921-b6e8-3bb17af01fbd/volumes" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.648441 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.649478 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="extract-utilities" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652545 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="extract-utilities" Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.652586 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="extract-content" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652603 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="extract-content" Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.652620 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652626 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.652638 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="extract-utilities" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652645 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="extract-utilities" Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.652665 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652671 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: E0126 18:21:06.652697 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="extract-content" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652703 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="extract-content" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652923 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9452808-921b-4d19-bb59-c6fe28cea62e" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.652954 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0adbfd-b62c-4921-b6e8-3bb17af01fbd" containerName="registry-server" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.654394 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.665960 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.749290 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.749398 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.749504 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvcb\" (UniqueName: \"kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.855663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rvcb\" (UniqueName: \"kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.855899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.855976 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.856713 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.857416 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.892045 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4rvcb\" (UniqueName: \"kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb\") pod \"certified-operators-vlbxb\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:06 crc kubenswrapper[4680]: I0126 18:21:06.976143 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:07 crc kubenswrapper[4680]: I0126 18:21:07.494909 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:07 crc kubenswrapper[4680]: I0126 18:21:07.851459 4680 generic.go:334] "Generic (PLEG): container finished" podID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerID="cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071" exitCode=0 Jan 26 18:21:07 crc kubenswrapper[4680]: I0126 18:21:07.851781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerDied","Data":"cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071"} Jan 26 18:21:07 crc kubenswrapper[4680]: I0126 18:21:07.851879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerStarted","Data":"7707a318e7229f0e029c5f1e6cc5533827d34576d38fde795089d5a9978b631e"} Jan 26 18:21:08 crc kubenswrapper[4680]: I0126 18:21:08.866113 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerStarted","Data":"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb"} Jan 26 18:21:10 crc kubenswrapper[4680]: I0126 18:21:10.887392 4680 generic.go:334] "Generic (PLEG): container finished" podID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerID="1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb" exitCode=0 Jan 26 18:21:10 crc kubenswrapper[4680]: I0126 18:21:10.887876 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerDied","Data":"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb"} Jan 26 18:21:11 crc kubenswrapper[4680]: I0126 18:21:11.898277 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerStarted","Data":"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616"} Jan 26 18:21:11 crc kubenswrapper[4680]: I0126 18:21:11.917285 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlbxb" podStartSLOduration=2.465347643 podStartE2EDuration="5.917267091s" podCreationTimestamp="2026-01-26 18:21:06 +0000 UTC" firstStartedPulling="2026-01-26 18:21:07.853408063 +0000 UTC m=+8143.014680332" lastFinishedPulling="2026-01-26 18:21:11.305327511 +0000 UTC m=+8146.466599780" observedRunningTime="2026-01-26 18:21:11.91472908 +0000 UTC m=+8147.076001369" watchObservedRunningTime="2026-01-26 18:21:11.917267091 +0000 UTC m=+8147.078539360" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.976739 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.977339 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.980431 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.980482 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.980520 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.981208 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:21:16 crc kubenswrapper[4680]: I0126 18:21:16.981272 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" containerID="cri-o://8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" gracePeriod=600 Jan 26 18:21:17 crc kubenswrapper[4680]: I0126 18:21:17.039393 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:17 crc kubenswrapper[4680]: E0126 18:21:17.105534 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:21:17 crc kubenswrapper[4680]: I0126 18:21:17.951423 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" exitCode=0 Jan 26 18:21:17 crc kubenswrapper[4680]: I0126 18:21:17.951517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4"} Jan 26 18:21:17 crc kubenswrapper[4680]: I0126 18:21:17.952234 4680 scope.go:117] "RemoveContainer" containerID="934bf195145a3cd9eae3b860b0665e12cb842dc618adc0937f5cbd1c143d2ce4" Jan 26 18:21:17 crc kubenswrapper[4680]: I0126 
18:21:17.954095 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:21:17 crc kubenswrapper[4680]: E0126 18:21:17.954358 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:21:18 crc kubenswrapper[4680]: I0126 18:21:18.018692 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:18 crc kubenswrapper[4680]: I0126 18:21:18.135125 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:19 crc kubenswrapper[4680]: I0126 18:21:19.971249 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlbxb" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="registry-server" containerID="cri-o://aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616" gracePeriod=2 Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.669313 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.784799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content\") pod \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.785008 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rvcb\" (UniqueName: \"kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb\") pod \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.785088 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities\") pod \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\" (UID: \"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f\") " Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.786256 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities" (OuterVolumeSpecName: "utilities") pod "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" (UID: "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.794968 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb" (OuterVolumeSpecName: "kube-api-access-4rvcb") pod "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" (UID: "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f"). InnerVolumeSpecName "kube-api-access-4rvcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.832710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" (UID: "782c4b9a-c7b2-47b0-a4bf-3691b9156b9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.887181 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rvcb\" (UniqueName: \"kubernetes.io/projected/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-kube-api-access-4rvcb\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.887214 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.887223 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.989018 4680 generic.go:334] "Generic (PLEG): container finished" podID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerID="aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616" exitCode=0 Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.989057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerDied","Data":"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616"} Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.989095 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlbxb" event={"ID":"782c4b9a-c7b2-47b0-a4bf-3691b9156b9f","Type":"ContainerDied","Data":"7707a318e7229f0e029c5f1e6cc5533827d34576d38fde795089d5a9978b631e"} Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.989115 4680 scope.go:117] "RemoveContainer" containerID="aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616" Jan 26 18:21:20 crc kubenswrapper[4680]: I0126 18:21:20.989112 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlbxb" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.018886 4680 scope.go:117] "RemoveContainer" containerID="1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.024543 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.034823 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlbxb"] Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.059331 4680 scope.go:117] "RemoveContainer" containerID="cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.100189 4680 scope.go:117] "RemoveContainer" containerID="aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616" Jan 26 18:21:21 crc kubenswrapper[4680]: E0126 18:21:21.100681 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616\": container with ID starting with aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616 not found: ID does not exist" containerID="aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.100750 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616"} err="failed to get container status \"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616\": rpc error: code = NotFound desc = could not find container \"aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616\": container with ID starting with aecd149e89b817a344efcb3d7f315f81df53a393c5fb49128708ee076d617616 not found: ID does not exist" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.100778 4680 scope.go:117] "RemoveContainer" containerID="1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb" Jan 26 18:21:21 crc kubenswrapper[4680]: E0126 18:21:21.101199 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb\": container with ID starting with 1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb not found: ID does not exist" containerID="1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.101234 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb"} err="failed to get container status \"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb\": rpc error: code = NotFound desc = could not find container \"1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb\": container with ID starting with 1250e34f4c8d3b583f3a83f40c3fbacb45ec30236759f5dcde7668fdc09b5ffb not found: ID does not exist" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.101277 4680 scope.go:117] "RemoveContainer" containerID="cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071" Jan 26 18:21:21 crc kubenswrapper[4680]: E0126 18:21:21.101581 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071\": container with ID starting with cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071 not found: ID does not exist" containerID="cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.101602 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071"} err="failed to get container status \"cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071\": rpc error: code = NotFound desc = could not find container \"cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071\": container with ID starting with cf6bd09612cf8c3b2490e957a5a9004898492a7033b1b4711bebb62e96e6b071 not found: ID does not exist" Jan 26 18:21:21 crc kubenswrapper[4680]: I0126 18:21:21.183372 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" path="/var/lib/kubelet/pods/782c4b9a-c7b2-47b0-a4bf-3691b9156b9f/volumes" Jan 26 18:21:33 crc kubenswrapper[4680]: I0126 18:21:33.170603 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:21:33 crc kubenswrapper[4680]: E0126 18:21:33.171820 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:21:47 crc kubenswrapper[4680]: I0126 18:21:47.170638 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:21:47 crc kubenswrapper[4680]: E0126 18:21:47.171415 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:22:02 crc kubenswrapper[4680]: I0126 18:22:02.169577 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:22:02 crc kubenswrapper[4680]: E0126 18:22:02.170343 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:22:17 crc kubenswrapper[4680]: I0126 18:22:17.170735 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:22:17 crc kubenswrapper[4680]: E0126 18:22:17.172394 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.046988 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:24 crc kubenswrapper[4680]: E0126 18:22:24.050061 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="extract-utilities" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.050107 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="extract-utilities" Jan 26 18:22:24 crc kubenswrapper[4680]: E0126 18:22:24.050149 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="registry-server" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.050156 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="registry-server" Jan 26 18:22:24 crc kubenswrapper[4680]: E0126 18:22:24.050171 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="extract-content" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.050176 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="extract-content" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.050495 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="782c4b9a-c7b2-47b0-a4bf-3691b9156b9f" containerName="registry-server" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.052745 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.251302 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvrn\" (UniqueName: \"kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.251860 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.252282 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.266463 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.354878 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvrn\" (UniqueName: \"kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.355022 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.355519 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.363427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.363661 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.382346 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2qvrn\" (UniqueName: \"kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn\") pod \"redhat-marketplace-7rs8s\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:24 crc kubenswrapper[4680]: I0126 18:22:24.575168 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:25 crc kubenswrapper[4680]: I0126 18:22:25.248456 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:25 crc kubenswrapper[4680]: I0126 18:22:25.647444 4680 generic.go:334] "Generic (PLEG): container finished" podID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerID="451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6" exitCode=0 Jan 26 18:22:25 crc kubenswrapper[4680]: I0126 18:22:25.648079 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerDied","Data":"451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6"} Jan 26 18:22:25 crc kubenswrapper[4680]: I0126 18:22:25.648492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerStarted","Data":"96cdd0f4bf4829bdae0c6f3be123f89c1afcd86cd74d6fe568d237e634795b3e"} Jan 26 18:22:27 crc kubenswrapper[4680]: I0126 18:22:27.668625 4680 generic.go:334] "Generic (PLEG): container finished" podID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerID="a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9" exitCode=0 Jan 26 18:22:27 crc kubenswrapper[4680]: I0126 18:22:27.668702 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerDied","Data":"a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9"} Jan 26 18:22:28 crc kubenswrapper[4680]: I0126 18:22:28.679414 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerStarted","Data":"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69"} Jan 26 18:22:28 crc kubenswrapper[4680]: I0126 18:22:28.710311 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7rs8s" podStartSLOduration=2.27387037 podStartE2EDuration="4.709617571s" podCreationTimestamp="2026-01-26 18:22:24 +0000 UTC" firstStartedPulling="2026-01-26 18:22:25.65020189 +0000 UTC m=+8220.811474159" lastFinishedPulling="2026-01-26 18:22:28.085949091 +0000 UTC m=+8223.247221360" observedRunningTime="2026-01-26 18:22:28.700121444 +0000 UTC m=+8223.861393713" watchObservedRunningTime="2026-01-26 18:22:28.709617571 +0000 UTC m=+8223.870889830" Jan 26 18:22:31 crc kubenswrapper[4680]: I0126 18:22:31.170448 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:22:31 crc kubenswrapper[4680]: E0126 18:22:31.171276 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:22:34 crc kubenswrapper[4680]: I0126 18:22:34.576169 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:34 crc kubenswrapper[4680]: I0126 18:22:34.576856 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:34 crc kubenswrapper[4680]: I0126 18:22:34.629339 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:34 crc kubenswrapper[4680]: I0126 18:22:34.783811 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:34 crc kubenswrapper[4680]: I0126 18:22:34.877538 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:36 crc kubenswrapper[4680]: I0126 18:22:36.749440 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7rs8s" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="registry-server" containerID="cri-o://41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69" gracePeriod=2 Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.336124 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.442138 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qvrn\" (UniqueName: \"kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn\") pod \"c6ecda44-ee98-40a8-9f37-51578f07f507\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.442291 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content\") pod \"c6ecda44-ee98-40a8-9f37-51578f07f507\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.442363 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities\") pod \"c6ecda44-ee98-40a8-9f37-51578f07f507\" (UID: \"c6ecda44-ee98-40a8-9f37-51578f07f507\") " Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.442920 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities" (OuterVolumeSpecName: "utilities") pod "c6ecda44-ee98-40a8-9f37-51578f07f507" (UID: "c6ecda44-ee98-40a8-9f37-51578f07f507"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.443566 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.454014 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn" (OuterVolumeSpecName: "kube-api-access-2qvrn") pod "c6ecda44-ee98-40a8-9f37-51578f07f507" (UID: "c6ecda44-ee98-40a8-9f37-51578f07f507"). InnerVolumeSpecName "kube-api-access-2qvrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.470240 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6ecda44-ee98-40a8-9f37-51578f07f507" (UID: "c6ecda44-ee98-40a8-9f37-51578f07f507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.545354 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qvrn\" (UniqueName: \"kubernetes.io/projected/c6ecda44-ee98-40a8-9f37-51578f07f507-kube-api-access-2qvrn\") on node \"crc\" DevicePath \"\"" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.545393 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ecda44-ee98-40a8-9f37-51578f07f507-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.759951 4680 generic.go:334] "Generic (PLEG): container finished" podID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerID="41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69" exitCode=0 Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.759995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerDied","Data":"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69"} Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.760022 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rs8s" event={"ID":"c6ecda44-ee98-40a8-9f37-51578f07f507","Type":"ContainerDied","Data":"96cdd0f4bf4829bdae0c6f3be123f89c1afcd86cd74d6fe568d237e634795b3e"} Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.760038 4680 scope.go:117] "RemoveContainer" containerID="41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.760167 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rs8s" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.801369 4680 scope.go:117] "RemoveContainer" containerID="a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.804455 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.820967 4680 scope.go:117] "RemoveContainer" containerID="451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.826039 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rs8s"] Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.871348 4680 scope.go:117] "RemoveContainer" containerID="41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69" Jan 26 18:22:37 crc kubenswrapper[4680]: E0126 18:22:37.871826 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69\": container with ID starting with 41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69 not found: ID does not exist" containerID="41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.871871 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69"} err="failed to get container status \"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69\": rpc error: code = NotFound desc = could not find container \"41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69\": container with ID starting with 41ab7a07e12d9bd785524f6a3c443d13a612b802afd062414a385c4c6c5d0a69 not found: ID does not exist" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.871898 4680 scope.go:117] "RemoveContainer" containerID="a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9" Jan 26 18:22:37 crc kubenswrapper[4680]: E0126 18:22:37.872224 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9\": container with ID starting with a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9 not found: ID does not exist" containerID="a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.872264 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9"} err="failed to get container status \"a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9\": rpc error: code = NotFound desc = could not find container \"a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9\": container with ID starting with a565df1327a938a64deb983b09e700a3d658e7a1a85b89879621348550995bf9 not found: ID does not exist" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.872293 4680 scope.go:117] "RemoveContainer" containerID="451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6" Jan 26 18:22:37 crc kubenswrapper[4680]: E0126 18:22:37.872529 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6\": container with ID starting with 451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6 not found: ID does not exist" containerID="451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6" Jan 26 18:22:37 crc kubenswrapper[4680]: I0126 18:22:37.872550 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6"} err="failed to get container status \"451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6\": rpc error: code = NotFound desc = could not find container \"451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6\": container with ID starting with 451b2fc987ea2ec507372fb7139e99b28a0b0dc1ad4f3645f12a8604a9b805f6 not found: ID does not exist" Jan 26 18:22:39 crc kubenswrapper[4680]: I0126 18:22:39.183426 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" path="/var/lib/kubelet/pods/c6ecda44-ee98-40a8-9f37-51578f07f507/volumes" Jan 26 18:22:43 crc kubenswrapper[4680]: I0126 18:22:43.170408 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:22:43 crc kubenswrapper[4680]: E0126 18:22:43.171204 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:22:55 crc kubenswrapper[4680]: I0126 18:22:55.176208 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:22:55 crc kubenswrapper[4680]: E0126 18:22:55.177160 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:23:07 crc kubenswrapper[4680]: I0126 18:23:07.170859 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:23:07 crc kubenswrapper[4680]: E0126 18:23:07.171641 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:23:22 crc kubenswrapper[4680]: I0126 18:23:22.170384 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:23:22 crc kubenswrapper[4680]: E0126 18:23:22.171160 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:23:37 crc kubenswrapper[4680]: I0126 18:23:37.170117 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:23:37 crc kubenswrapper[4680]: E0126 18:23:37.171027 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:23:52 crc kubenswrapper[4680]: I0126 18:23:52.171368 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:23:52 crc kubenswrapper[4680]: E0126 18:23:52.172385 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:24:05 crc kubenswrapper[4680]: I0126 18:24:05.176593 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:24:05 crc kubenswrapper[4680]: E0126 18:24:05.177471 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:24:16 crc kubenswrapper[4680]: I0126 18:24:16.169745 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:24:16 crc kubenswrapper[4680]: E0126 18:24:16.170570 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:24:27 crc kubenswrapper[4680]: I0126 18:24:27.172279 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:24:27 crc kubenswrapper[4680]: E0126 18:24:27.175647 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:24:39 crc kubenswrapper[4680]: I0126 18:24:39.170158 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:24:39 crc kubenswrapper[4680]: E0126 18:24:39.171150 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:24:54 crc kubenswrapper[4680]: I0126 18:24:54.169925 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:24:54 crc kubenswrapper[4680]: E0126 18:24:54.171621 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:25:08 crc kubenswrapper[4680]: I0126 18:25:08.169532 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:25:08 crc kubenswrapper[4680]: E0126 18:25:08.170313 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:25:23 crc kubenswrapper[4680]: I0126 18:25:23.169691 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:25:23 crc kubenswrapper[4680]: E0126 18:25:23.170446 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:25:37 crc kubenswrapper[4680]: I0126 18:25:37.170272 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:25:37 crc kubenswrapper[4680]: E0126 18:25:37.171102 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" 
podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:25:49 crc kubenswrapper[4680]: I0126 18:25:49.170324 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:25:49 crc kubenswrapper[4680]: E0126 18:25:49.171090 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:26:03 crc kubenswrapper[4680]: I0126 18:26:03.170843 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:26:03 crc kubenswrapper[4680]: E0126 18:26:03.171646 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:26:16 crc kubenswrapper[4680]: I0126 18:26:16.170331 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:26:16 crc kubenswrapper[4680]: E0126 18:26:16.171085 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qr4fm_openshift-machine-config-operator(4cbae131-7d55-4573-b849-5a223c64ffa7)\"" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" Jan 26 18:26:28 crc kubenswrapper[4680]: I0126 18:26:28.169475 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:26:28 crc kubenswrapper[4680]: I0126 18:26:28.987129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"35c15b3830699afde6ae4fb6322f1db085f5cb45ccacf878dc68f1b1ddd24498"} Jan 26 18:28:46 crc kubenswrapper[4680]: I0126 18:28:46.981375 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:28:46 crc kubenswrapper[4680]: I0126 18:28:46.982571 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:29:16 crc kubenswrapper[4680]: I0126 18:29:16.981508 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:29:16 crc kubenswrapper[4680]: I0126 18:29:16.983180 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:29:31 crc kubenswrapper[4680]: I0126 18:29:31.802786 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"3c38590c-46c7-4af3-8791-04b8c4830b6f","Type":"ContainerDied","Data":"920a287113b5f2d348d07808d9064569bfb2b6458b1911a4339ee880f6349c5b"} Jan 26 18:29:31 crc kubenswrapper[4680]: I0126 18:29:31.805359 4680 generic.go:334] "Generic (PLEG): container finished" podID="3c38590c-46c7-4af3-8791-04b8c4830b6f" containerID="920a287113b5f2d348d07808d9064569bfb2b6458b1911a4339ee880f6349c5b" exitCode=1 Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.539612 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.616629 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.616993 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617221 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617333 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617466 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617594 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617754 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.617986 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.618162 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df47b\" (UniqueName: \"kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b\") pod \"3c38590c-46c7-4af3-8791-04b8c4830b6f\" (UID: \"3c38590c-46c7-4af3-8791-04b8c4830b6f\") " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.629835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.641713 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b" (OuterVolumeSpecName: "kube-api-access-df47b") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "kube-api-access-df47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.647485 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.652162 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.659513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data" (OuterVolumeSpecName: "config-data") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.723420 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.723667 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df47b\" (UniqueName: \"kubernetes.io/projected/3c38590c-46c7-4af3-8791-04b8c4830b6f-kube-api-access-df47b\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.723681 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.723711 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.723945 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c38590c-46c7-4af3-8791-04b8c4830b6f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.733454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.790929 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.791135 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.799510 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3c38590c-46c7-4af3-8791-04b8c4830b6f" (UID: "3c38590c-46c7-4af3-8791-04b8c4830b6f"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.800647 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.826616 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.826901 4680 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.826982 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.827055 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.827164 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c38590c-46c7-4af3-8791-04b8c4830b6f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.854334 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"3c38590c-46c7-4af3-8791-04b8c4830b6f","Type":"ContainerDied","Data":"44f76a6e359304fc983593e6aecf3f0cc5496b1b0142c9be9ea2ca4282a701f0"} Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.855933 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 18:29:33 crc kubenswrapper[4680]: I0126 18:29:33.859771 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f76a6e359304fc983593e6aecf3f0cc5496b1b0142c9be9ea2ca4282a701f0" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.639881 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 18:29:42 crc kubenswrapper[4680]: E0126 18:29:42.641602 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="extract-content" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641621 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="extract-content" Jan 26 18:29:42 crc kubenswrapper[4680]: E0126 18:29:42.641641 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="registry-server" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641649 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="registry-server" Jan 26 18:29:42 crc kubenswrapper[4680]: E0126 18:29:42.641677 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="extract-utilities" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641686 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="extract-utilities" Jan 26 18:29:42 crc kubenswrapper[4680]: E0126 18:29:42.641710 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38590c-46c7-4af3-8791-04b8c4830b6f" containerName="tempest-tests-tempest-tests-runner" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641717 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38590c-46c7-4af3-8791-04b8c4830b6f" containerName="tempest-tests-tempest-tests-runner" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641894 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c38590c-46c7-4af3-8791-04b8c4830b6f" containerName="tempest-tests-tempest-tests-runner" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.641912 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ecda44-ee98-40a8-9f37-51578f07f507" containerName="registry-server" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.643455 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.647105 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pzhsj" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.707257 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.804994 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.805171 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmmw\" (UniqueName: \"kubernetes.io/projected/e353b660-7977-4278-b0ec-2dea00adb001-kube-api-access-fwmmw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.908155 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.908670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmmw\" (UniqueName: \"kubernetes.io/projected/e353b660-7977-4278-b0ec-2dea00adb001-kube-api-access-fwmmw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.909522 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.939572 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmmw\" (UniqueName: \"kubernetes.io/projected/e353b660-7977-4278-b0ec-2dea00adb001-kube-api-access-fwmmw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc kubenswrapper[4680]: I0126 18:29:42.943492 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e353b660-7977-4278-b0ec-2dea00adb001\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:42 crc 
kubenswrapper[4680]: I0126 18:29:42.964229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 18:29:43 crc kubenswrapper[4680]: I0126 18:29:43.506142 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 18:29:43 crc kubenswrapper[4680]: I0126 18:29:43.514652 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:29:43 crc kubenswrapper[4680]: I0126 18:29:43.939454 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e353b660-7977-4278-b0ec-2dea00adb001","Type":"ContainerStarted","Data":"2d43f2e7225a3e70c5984c094884a1ca35698316122e133739ee78842bf9f561"} Jan 26 18:29:44 crc kubenswrapper[4680]: I0126 18:29:44.951398 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e353b660-7977-4278-b0ec-2dea00adb001","Type":"ContainerStarted","Data":"476c8b75b8e668c9255c78f1f9f8e7f439394ac21d007f553652695a3815face"} Jan 26 18:29:44 crc kubenswrapper[4680]: I0126 18:29:44.974450 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9906244549999998 podStartE2EDuration="2.973587928s" podCreationTimestamp="2026-01-26 18:29:42 +0000 UTC" firstStartedPulling="2026-01-26 18:29:43.513804492 +0000 UTC m=+8658.675076761" lastFinishedPulling="2026-01-26 18:29:44.496767965 +0000 UTC m=+8659.658040234" observedRunningTime="2026-01-26 18:29:44.967788625 +0000 UTC m=+8660.129060894" watchObservedRunningTime="2026-01-26 18:29:44.973587928 +0000 UTC m=+8660.134860197" Jan 26 18:29:46 crc kubenswrapper[4680]: I0126 18:29:46.981213 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:29:46 crc kubenswrapper[4680]: I0126 18:29:46.982232 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:29:46 crc kubenswrapper[4680]: I0126 18:29:46.982788 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" Jan 26 18:29:46 crc kubenswrapper[4680]: I0126 18:29:46.984276 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35c15b3830699afde6ae4fb6322f1db085f5cb45ccacf878dc68f1b1ddd24498"} pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:29:46 crc kubenswrapper[4680]: I0126 18:29:46.984350 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" 
containerID="cri-o://35c15b3830699afde6ae4fb6322f1db085f5cb45ccacf878dc68f1b1ddd24498" gracePeriod=600 Jan 26 18:29:47 crc kubenswrapper[4680]: I0126 18:29:47.982206 4680 generic.go:334] "Generic (PLEG): container finished" podID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerID="35c15b3830699afde6ae4fb6322f1db085f5cb45ccacf878dc68f1b1ddd24498" exitCode=0 Jan 26 18:29:47 crc kubenswrapper[4680]: I0126 18:29:47.982733 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerDied","Data":"35c15b3830699afde6ae4fb6322f1db085f5cb45ccacf878dc68f1b1ddd24498"} Jan 26 18:29:47 crc kubenswrapper[4680]: I0126 18:29:47.982766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" event={"ID":"4cbae131-7d55-4573-b849-5a223c64ffa7","Type":"ContainerStarted","Data":"c33cd2d61be9731454f8aa646c001f0d3b77dcb9e025eac656701441ea4ca098"} Jan 26 18:29:47 crc kubenswrapper[4680]: I0126 18:29:47.982787 4680 scope.go:117] "RemoveContainer" containerID="8a9df212a468b4c61d2fffc971e23304f9f537ad93bc1d66251965f700b6aad4" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.173194 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c"] Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.175018 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.177097 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.178043 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.245802 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c"] Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.361587 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.361755 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzm4j\" (UniqueName: \"kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.361937 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 
18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.463135 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.463298 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzm4j\" (UniqueName: \"kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.463358 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.464346 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.468334 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.479639 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzm4j\" (UniqueName: \"kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j\") pod \"collect-profiles-29490870-95r7c\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.493777 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:00 crc kubenswrapper[4680]: I0126 18:30:00.975215 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c"] Jan 26 18:30:01 crc kubenswrapper[4680]: I0126 18:30:01.096207 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" event={"ID":"3949be63-f167-4b8e-b078-f552ade5950b","Type":"ContainerStarted","Data":"99fcce76c23a2a92e257fbecfd75368bb81aa3042e5bf760a152a7abb3351b11"} Jan 26 18:30:02 crc kubenswrapper[4680]: I0126 18:30:02.106534 4680 generic.go:334] "Generic (PLEG): container finished" podID="3949be63-f167-4b8e-b078-f552ade5950b" containerID="349a2af64ea673f7db64585167d2482ab1306311ac8222ceb586dfeeac45fa21" exitCode=0 Jan 26 18:30:02 crc kubenswrapper[4680]: I0126 18:30:02.106590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" event={"ID":"3949be63-f167-4b8e-b078-f552ade5950b","Type":"ContainerDied","Data":"349a2af64ea673f7db64585167d2482ab1306311ac8222ceb586dfeeac45fa21"} Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.689717 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.823656 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzm4j\" (UniqueName: \"kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j\") pod \"3949be63-f167-4b8e-b078-f552ade5950b\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.823978 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume\") pod \"3949be63-f167-4b8e-b078-f552ade5950b\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.824264 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume\") pod \"3949be63-f167-4b8e-b078-f552ade5950b\" (UID: \"3949be63-f167-4b8e-b078-f552ade5950b\") " Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.826867 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3949be63-f167-4b8e-b078-f552ade5950b" (UID: "3949be63-f167-4b8e-b078-f552ade5950b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.833022 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3949be63-f167-4b8e-b078-f552ade5950b" (UID: "3949be63-f167-4b8e-b078-f552ade5950b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.837188 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j" (OuterVolumeSpecName: "kube-api-access-nzm4j") pod "3949be63-f167-4b8e-b078-f552ade5950b" (UID: "3949be63-f167-4b8e-b078-f552ade5950b"). InnerVolumeSpecName "kube-api-access-nzm4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.926846 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzm4j\" (UniqueName: \"kubernetes.io/projected/3949be63-f167-4b8e-b078-f552ade5950b-kube-api-access-nzm4j\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.926884 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3949be63-f167-4b8e-b078-f552ade5950b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:03 crc kubenswrapper[4680]: I0126 18:30:03.926894 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3949be63-f167-4b8e-b078-f552ade5950b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:04 crc kubenswrapper[4680]: I0126 18:30:04.127800 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" event={"ID":"3949be63-f167-4b8e-b078-f552ade5950b","Type":"ContainerDied","Data":"99fcce76c23a2a92e257fbecfd75368bb81aa3042e5bf760a152a7abb3351b11"} Jan 26 18:30:04 crc kubenswrapper[4680]: I0126 18:30:04.127847 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99fcce76c23a2a92e257fbecfd75368bb81aa3042e5bf760a152a7abb3351b11" Jan 26 18:30:04 crc kubenswrapper[4680]: I0126 18:30:04.127932 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-95r7c" Jan 26 18:30:04 crc kubenswrapper[4680]: I0126 18:30:04.772686 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz"] Jan 26 18:30:04 crc kubenswrapper[4680]: I0126 18:30:04.783956 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-lcqjz"] Jan 26 18:30:05 crc kubenswrapper[4680]: I0126 18:30:05.180899 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c59e01a-f156-45c4-bdfe-1e1abaabaf84" path="/var/lib/kubelet/pods/7c59e01a-f156-45c4-bdfe-1e1abaabaf84/volumes" Jan 26 18:30:48 crc kubenswrapper[4680]: I0126 18:30:48.998773 4680 scope.go:117] "RemoveContainer" containerID="c5c9254e0a030802c08a6cae10c595b612102cc628e7c41da2fd05835beb9ddc" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.244660 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:30:55 crc kubenswrapper[4680]: E0126 18:30:55.245527 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3949be63-f167-4b8e-b078-f552ade5950b" containerName="collect-profiles" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.245539 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3949be63-f167-4b8e-b078-f552ade5950b" containerName="collect-profiles" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.245935 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3949be63-f167-4b8e-b078-f552ade5950b" containerName="collect-profiles" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.248084 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.260957 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.355260 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.355480 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjjr\" (UniqueName: \"kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.355575 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.457628 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.457775 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxjjr\" (UniqueName: \"kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.457828 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.458209 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.458257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.479841 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxjjr\" (UniqueName: \"kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr\") pod \"community-operators-fxtnp\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:55 crc kubenswrapper[4680]: I0126 18:30:55.608280 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:30:56 crc kubenswrapper[4680]: I0126 18:30:56.114886 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:30:56 crc kubenswrapper[4680]: I0126 18:30:56.603042 4680 generic.go:334] "Generic (PLEG): container finished" podID="e46988ea-f27a-4000-92bf-397fbd955416" containerID="994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c" exitCode=0 Jan 26 18:30:56 crc kubenswrapper[4680]: I0126 18:30:56.603280 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerDied","Data":"994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c"} Jan 26 18:30:56 crc kubenswrapper[4680]: I0126 18:30:56.603351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerStarted","Data":"cb28a8164819c9158b966dbed1ec23fe2638a92416ca3ffc85b71e941d4f1c7c"} Jan 26 18:30:58 crc kubenswrapper[4680]: I0126 18:30:58.621105 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerStarted","Data":"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5"} Jan 26 18:31:00 crc kubenswrapper[4680]: I0126 18:31:00.638327 4680 generic.go:334] "Generic (PLEG): container finished" podID="e46988ea-f27a-4000-92bf-397fbd955416" containerID="b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5" exitCode=0 Jan 26 18:31:00 crc kubenswrapper[4680]: I0126 18:31:00.638406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerDied","Data":"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5"} Jan 26 18:31:02 crc kubenswrapper[4680]: I0126 18:31:02.665750 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerStarted","Data":"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302"} Jan 26 18:31:02 crc kubenswrapper[4680]: I0126 18:31:02.691802 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fxtnp" podStartSLOduration=2.892831083 podStartE2EDuration="7.69178187s" podCreationTimestamp="2026-01-26 18:30:55 +0000 UTC" firstStartedPulling="2026-01-26 18:30:56.60530451 +0000 UTC m=+8731.766576779" lastFinishedPulling="2026-01-26 18:31:01.404255297 +0000 UTC m=+8736.565527566" observedRunningTime="2026-01-26 18:31:02.681408948 +0000 UTC m=+8737.842681217" watchObservedRunningTime="2026-01-26 18:31:02.69178187 +0000 UTC m=+8737.853054139" Jan 26 18:31:05 crc kubenswrapper[4680]: I0126 18:31:05.609105 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:05 crc kubenswrapper[4680]: I0126 18:31:05.609581 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:05 crc kubenswrapper[4680]: I0126 18:31:05.655612 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.788085 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.792790 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.823135 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.876498 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.876722 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9c6\" (UniqueName: \"kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.876788 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.980126 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t9c6\" (UniqueName: \"kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.980254 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.980316 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.981262 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:13 crc kubenswrapper[4680]: I0126 18:31:13.981317 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:14 crc kubenswrapper[4680]: I0126 18:31:14.014920 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t9c6\" (UniqueName: \"kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6\") pod \"redhat-operators-2vszw\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:14 crc kubenswrapper[4680]: I0126 18:31:14.120554 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:14 crc kubenswrapper[4680]: I0126 18:31:14.675319 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:14 crc kubenswrapper[4680]: I0126 18:31:14.789991 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerStarted","Data":"824c4eef070e843fce46413ec0ccffa05bf1eb074eaea157b0ca097275b9c29c"} Jan 26 18:31:15 crc kubenswrapper[4680]: I0126 18:31:15.659608 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:15 crc kubenswrapper[4680]: I0126 18:31:15.799439 4680 generic.go:334] "Generic (PLEG): container finished" podID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerID="18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514" exitCode=0 Jan 26 18:31:15 crc kubenswrapper[4680]: I0126 18:31:15.799515 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerDied","Data":"18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514"} Jan 26 18:31:16 crc kubenswrapper[4680]: I0126 18:31:16.808679 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerStarted","Data":"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3"} Jan 26 18:31:17 crc kubenswrapper[4680]: I0126 18:31:17.961087 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:31:17 crc kubenswrapper[4680]: I0126 18:31:17.961509 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fxtnp" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="registry-server" containerID="cri-o://0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302" gracePeriod=2 Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.655817 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.785430 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content\") pod \"e46988ea-f27a-4000-92bf-397fbd955416\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.785504 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities\") pod \"e46988ea-f27a-4000-92bf-397fbd955416\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.785617 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxjjr\" (UniqueName: \"kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr\") pod \"e46988ea-f27a-4000-92bf-397fbd955416\" (UID: \"e46988ea-f27a-4000-92bf-397fbd955416\") " Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.787196 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities" (OuterVolumeSpecName: "utilities") pod "e46988ea-f27a-4000-92bf-397fbd955416" (UID: "e46988ea-f27a-4000-92bf-397fbd955416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.793948 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr" (OuterVolumeSpecName: "kube-api-access-dxjjr") pod "e46988ea-f27a-4000-92bf-397fbd955416" (UID: "e46988ea-f27a-4000-92bf-397fbd955416"). InnerVolumeSpecName "kube-api-access-dxjjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.848505 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e46988ea-f27a-4000-92bf-397fbd955416" (UID: "e46988ea-f27a-4000-92bf-397fbd955416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.886876 4680 generic.go:334] "Generic (PLEG): container finished" podID="e46988ea-f27a-4000-92bf-397fbd955416" containerID="0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302" exitCode=0 Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.886927 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerDied","Data":"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302"} Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.886959 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxtnp" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.886981 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtnp" event={"ID":"e46988ea-f27a-4000-92bf-397fbd955416","Type":"ContainerDied","Data":"cb28a8164819c9158b966dbed1ec23fe2638a92416ca3ffc85b71e941d4f1c7c"} Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.887005 4680 scope.go:117] "RemoveContainer" containerID="0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.888946 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.888979 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46988ea-f27a-4000-92bf-397fbd955416-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.888992 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxjjr\" (UniqueName: \"kubernetes.io/projected/e46988ea-f27a-4000-92bf-397fbd955416-kube-api-access-dxjjr\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.940601 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.960272 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fxtnp"] Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.963772 4680 scope.go:117] "RemoveContainer" containerID="b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5" Jan 26 18:31:18 crc kubenswrapper[4680]: I0126 18:31:18.997721 4680 scope.go:117] "RemoveContainer" containerID="994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.049642 4680 scope.go:117] "RemoveContainer" containerID="0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302" Jan 26 18:31:19 crc kubenswrapper[4680]: E0126 18:31:19.051032 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302\": container with ID starting with 0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302 not found: ID does not exist" containerID="0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.051106 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302"} err="failed to get container status \"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302\": rpc error: code = NotFound desc = could not find container \"0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302\": container with ID starting with 0d0d678994f57032fde2cff5b0ed7ca372f197a2316ed01e9c7fed321a402302 not found: ID does not exist" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.051139 4680 scope.go:117] "RemoveContainer" containerID="b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5" Jan 26 18:31:19 crc kubenswrapper[4680]: 
E0126 18:31:19.051482 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5\": container with ID starting with b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5 not found: ID does not exist" containerID="b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.051518 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5"} err="failed to get container status \"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5\": rpc error: code = NotFound desc = could not find container \"b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5\": container with ID starting with b9a3639461de0d0d0e4ae08e8825f5210703862598b6afcd8848290ea7f900a5 not found: ID does not exist" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.051724 4680 scope.go:117] "RemoveContainer" containerID="994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c" Jan 26 18:31:19 crc kubenswrapper[4680]: E0126 18:31:19.052504 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c\": container with ID starting with 994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c not found: ID does not exist" containerID="994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.052578 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c"} err="failed to get container status \"994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c\": rpc error: code = NotFound desc = could not find container \"994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c\": container with ID starting with 994a1dfb347dfae53321bca99e6996b24fda75674941e164a6db5a1682991e9c not found: ID does not exist" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.180746 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46988ea-f27a-4000-92bf-397fbd955416" path="/var/lib/kubelet/pods/e46988ea-f27a-4000-92bf-397fbd955416/volumes" Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.898724 4680 generic.go:334] "Generic (PLEG): container finished" podID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerID="10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3" exitCode=0 Jan 26 18:31:19 crc kubenswrapper[4680]: I0126 18:31:19.898808 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerDied","Data":"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3"} Jan 26 18:31:21 crc kubenswrapper[4680]: I0126 18:31:21.921658 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerStarted","Data":"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146"} Jan 26 18:31:24 crc kubenswrapper[4680]: I0126 18:31:24.121183 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:24 crc kubenswrapper[4680]: I0126 18:31:24.121521 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:25 crc kubenswrapper[4680]: I0126 18:31:25.171547 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vszw" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server" probeResult="failure" output=< Jan 26 18:31:25 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:31:25 crc kubenswrapper[4680]: > Jan 26 18:31:35 crc kubenswrapper[4680]: I0126 18:31:35.171663 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vszw" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server" probeResult="failure" output=< Jan 26 18:31:35 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Jan 26 18:31:35 crc kubenswrapper[4680]: > Jan 26 18:31:44 crc kubenswrapper[4680]: I0126 18:31:44.170728 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:44 crc kubenswrapper[4680]: I0126 18:31:44.189354 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vszw" podStartSLOduration=26.267149304 podStartE2EDuration="31.189328086s" podCreationTimestamp="2026-01-26 18:31:13 +0000 UTC" firstStartedPulling="2026-01-26 18:31:15.803085614 +0000 UTC m=+8750.964357883" lastFinishedPulling="2026-01-26 18:31:20.725264396 +0000 UTC m=+8755.886536665" observedRunningTime="2026-01-26 18:31:21.944427479 +0000 UTC m=+8757.105699748" watchObservedRunningTime="2026-01-26 18:31:44.189328086 +0000 UTC m=+8779.350600355" Jan 26 18:31:44 crc kubenswrapper[4680]: I0126 18:31:44.222062 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:44 crc kubenswrapper[4680]: I0126 18:31:44.986456 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:45 crc kubenswrapper[4680]: I0126 18:31:45.466362 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vszw" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server" containerID="cri-o://09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146" gracePeriod=2 Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.019605 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.103038 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities\") pod \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.103735 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities" (OuterVolumeSpecName: "utilities") pod "08e7e24d-54e3-40ae-ac7c-06c267f3765b" (UID: "08e7e24d-54e3-40ae-ac7c-06c267f3765b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.103903 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t9c6\" (UniqueName: \"kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6\") pod \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.104316 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content\") pod \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\" (UID: \"08e7e24d-54e3-40ae-ac7c-06c267f3765b\") " Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.105509 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.116008 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6" (OuterVolumeSpecName: "kube-api-access-4t9c6") pod "08e7e24d-54e3-40ae-ac7c-06c267f3765b" (UID: "08e7e24d-54e3-40ae-ac7c-06c267f3765b"). InnerVolumeSpecName "kube-api-access-4t9c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.208402 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t9c6\" (UniqueName: \"kubernetes.io/projected/08e7e24d-54e3-40ae-ac7c-06c267f3765b-kube-api-access-4t9c6\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.241034 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08e7e24d-54e3-40ae-ac7c-06c267f3765b" (UID: "08e7e24d-54e3-40ae-ac7c-06c267f3765b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.311824 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e7e24d-54e3-40ae-ac7c-06c267f3765b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.486383 4680 generic.go:334] "Generic (PLEG): container finished" podID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerID="09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146" exitCode=0 Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.486434 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerDied","Data":"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146"} Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.486826 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vszw" event={"ID":"08e7e24d-54e3-40ae-ac7c-06c267f3765b","Type":"ContainerDied","Data":"824c4eef070e843fce46413ec0ccffa05bf1eb074eaea157b0ca097275b9c29c"} Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.486853 4680 scope.go:117] "RemoveContainer" containerID="09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.487040 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vszw" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.521982 4680 scope.go:117] "RemoveContainer" containerID="10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.528843 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.545750 4680 scope.go:117] "RemoveContainer" containerID="18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.548340 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vszw"] Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.598219 4680 scope.go:117] "RemoveContainer" containerID="09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146" Jan 26 18:31:46 crc kubenswrapper[4680]: E0126 18:31:46.598708 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146\": container with ID starting with 09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146 not found: ID does not exist" containerID="09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146" Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.598753 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146"} err="failed to get container status \"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146\": rpc error: code = NotFound desc = could not find container \"09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146\": container with ID starting with 09fe2942cae42dbf7666aff80212b6c3dda323dfb74c02195cc13dfd90911146 not found: ID does not exist" Jan 26 18:31:46 crc 
Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.598780 4680 scope.go:117] "RemoveContainer" containerID="10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3"
Jan 26 18:31:46 crc kubenswrapper[4680]: E0126 18:31:46.599294 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3\": container with ID starting with 10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3 not found: ID does not exist" containerID="10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3"
Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.599324 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3"} err="failed to get container status \"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3\": rpc error: code = NotFound desc = could not find container \"10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3\": container with ID starting with 10c80e389d605fdd29ddef88bcf4f7f2180dfb3de0725370d26e454f70316de3 not found: ID does not exist"
Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.599345 4680 scope.go:117] "RemoveContainer" containerID="18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514"
Jan 26 18:31:46 crc kubenswrapper[4680]: E0126 18:31:46.599800 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514\": container with ID starting with 18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514 not found: ID does not exist" containerID="18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514"
Jan 26 18:31:46 crc kubenswrapper[4680]: I0126 18:31:46.599828 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514"} err="failed to get container status \"18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514\": rpc error: code = NotFound desc = could not find container \"18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514\": container with ID starting with 18b922197e48c5c38e35d2f5c067dfa9c815f261344b8b03b5270c88fc014514 not found: ID does not exist"
Jan 26 18:31:47 crc kubenswrapper[4680]: I0126 18:31:47.186035 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" path="/var/lib/kubelet/pods/08e7e24d-54e3-40ae-ac7c-06c267f3765b/volumes"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.513408 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jxktn"]
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514348 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="registry-server"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514704 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="registry-server"
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514742 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="extract-content"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514751 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="extract-content"
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514783 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="extract-content"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514791 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="extract-content"
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514802 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="extract-utilities"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514810 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="extract-utilities"
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514825 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="extract-utilities"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514833 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="extract-utilities"
Jan 26 18:32:06 crc kubenswrapper[4680]: E0126 18:32:06.514849 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.514857 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.515132 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e7e24d-54e3-40ae-ac7c-06c267f3765b" containerName="registry-server"
Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.515161 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46988ea-f27a-4000-92bf-397fbd955416" containerName="registry-server"
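[editor's note] The burst of "RemoveStaleState" / "Deleted CPUSet assignment" entries is housekeeping triggered by the new pod ADD: per-container CPU and memory assignments left over from the now-deleted catalog pods (08e7e24d… and e46988ea…) are pruned from the resource managers' in-memory state. A simplified Go sketch of the pattern, not the kubelet's actual cpu_manager:

package main

import "fmt"

// cpuState is an illustrative stand-in for the CPU manager's checkpointed
// state: podUID -> containerName -> assigned CPU set (a string for brevity).
type cpuState struct {
	assignments map[string]map[string]string
}

// removeStaleState drops assignments for pods that are no longer active,
// mirroring the "RemoveStaleState: removing container" lines above.
func (s *cpuState) removeStaleState(activePods map[string]bool) {
	for podUID, containers := range s.assignments {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(s.assignments, podUID)
	}
}

func main() {
	s := &cpuState{assignments: map[string]map[string]string{
		"08e7e24d-54e3-40ae-ac7c-06c267f3765b": {"registry-server": "2-3"},
	}}
	// The deleted catalog pods are no longer active, so their state is pruned.
	s.removeStaleState(map[string]bool{})
}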
Need to start a new one" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.538177 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxktn"] Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.630754 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.630845 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.631035 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wc5n\" (UniqueName: \"kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.734035 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.734121 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wc5n\" (UniqueName: \"kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.734549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.735396 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.735421 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.759248 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9wc5n\" (UniqueName: \"kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n\") pod \"certified-operators-jxktn\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:06 crc kubenswrapper[4680]: I0126 18:32:06.856574 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:07 crc kubenswrapper[4680]: I0126 18:32:07.330584 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxktn"] Jan 26 18:32:07 crc kubenswrapper[4680]: I0126 18:32:07.670187 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerStarted","Data":"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87"} Jan 26 18:32:07 crc kubenswrapper[4680]: I0126 18:32:07.670236 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerStarted","Data":"023dc252b84e006cb87535f00c77d41026d53cdd5089aeba13c807f762885f28"} Jan 26 18:32:08 crc kubenswrapper[4680]: I0126 18:32:08.679863 4680 generic.go:334] "Generic (PLEG): container finished" podID="e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" containerID="1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87" exitCode=0 Jan 26 18:32:08 crc kubenswrapper[4680]: I0126 18:32:08.679919 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerDied","Data":"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87"} Jan 26 18:32:08 crc kubenswrapper[4680]: I0126 18:32:08.681325 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerStarted","Data":"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c"} Jan 26 18:32:09 crc kubenswrapper[4680]: I0126 18:32:09.692082 4680 generic.go:334] "Generic (PLEG): container finished" podID="e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" containerID="d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c" exitCode=0 Jan 26 18:32:09 crc kubenswrapper[4680]: I0126 18:32:09.694110 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerDied","Data":"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c"} Jan 26 18:32:10 crc kubenswrapper[4680]: I0126 18:32:10.705687 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerStarted","Data":"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145"} Jan 26 18:32:10 crc kubenswrapper[4680]: I0126 18:32:10.739032 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jxktn" podStartSLOduration=2.341422231 podStartE2EDuration="4.739014844s" podCreationTimestamp="2026-01-26 18:32:06 +0000 UTC" firstStartedPulling="2026-01-26 18:32:07.671665966 +0000 UTC m=+8802.832938235" lastFinishedPulling="2026-01-26 
18:32:10.069258579 +0000 UTC m=+8805.230530848" observedRunningTime="2026-01-26 18:32:10.724056571 +0000 UTC m=+8805.885328840" watchObservedRunningTime="2026-01-26 18:32:10.739014844 +0000 UTC m=+8805.900287113" Jan 26 18:32:16 crc kubenswrapper[4680]: I0126 18:32:16.857863 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:16 crc kubenswrapper[4680]: I0126 18:32:16.858419 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:16 crc kubenswrapper[4680]: I0126 18:32:16.905231 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:16 crc kubenswrapper[4680]: I0126 18:32:16.980500 4680 patch_prober.go:28] interesting pod/machine-config-daemon-qr4fm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:32:16 crc kubenswrapper[4680]: I0126 18:32:16.980559 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qr4fm" podUID="4cbae131-7d55-4573-b849-5a223c64ffa7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:32:17 crc kubenswrapper[4680]: I0126 18:32:17.834331 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:17 crc kubenswrapper[4680]: I0126 18:32:17.893127 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxktn"] Jan 26 18:32:19 crc kubenswrapper[4680]: I0126 18:32:19.784504 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jxktn" podUID="e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" containerName="registry-server" containerID="cri-o://7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145" gracePeriod=2 Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.346240 4680 util.go:48] "No ready sandbox for pod can be found. 
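[editor's note] The interleaved machine-config-daemon failure is an ordinary HTTP liveness probe: a GET against 127.0.0.1:8798/health that finds nothing listening ("connection refused"), typically because the daemon is restarting at that moment. A Go sketch of an equivalent check; the URL mirrors the log, the 1s timeout is an assumption, and the kubelet counts any status in [200, 400) as success:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// "connection refused" lands here, exactly like the probe output above.
		fmt.Fprintf(os.Stderr, "Liveness probe failure: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Fprintf(os.Stderr, "Liveness probe failure: status %d\n", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("healthy")
}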
Need to start a new one" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.544163 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content\") pod \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.544225 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities\") pod \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.544407 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wc5n\" (UniqueName: \"kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n\") pod \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\" (UID: \"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d\") " Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.545250 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities" (OuterVolumeSpecName: "utilities") pod "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" (UID: "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.554088 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n" (OuterVolumeSpecName: "kube-api-access-9wc5n") pod "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" (UID: "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d"). InnerVolumeSpecName "kube-api-access-9wc5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.591580 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" (UID: "e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.646955 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.646996 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.647007 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wc5n\" (UniqueName: \"kubernetes.io/projected/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d-kube-api-access-9wc5n\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.795302 4680 generic.go:334] "Generic (PLEG): container finished" podID="e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" containerID="7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145" exitCode=0 Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.795346 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerDied","Data":"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145"} Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.795373 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxktn" event={"ID":"e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d","Type":"ContainerDied","Data":"023dc252b84e006cb87535f00c77d41026d53cdd5089aeba13c807f762885f28"} Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.795395 4680 scope.go:117] "RemoveContainer" containerID="7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.795509 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jxktn" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.829793 4680 scope.go:117] "RemoveContainer" containerID="d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.839252 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxktn"] Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.849690 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jxktn"] Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.855406 4680 scope.go:117] "RemoveContainer" containerID="1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.906770 4680 scope.go:117] "RemoveContainer" containerID="7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145" Jan 26 18:32:20 crc kubenswrapper[4680]: E0126 18:32:20.907634 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145\": container with ID starting with 7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145 not found: ID does not exist" containerID="7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.907677 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145"} err="failed to get container status \"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145\": rpc error: code = NotFound desc = could not find container \"7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145\": container with ID starting with 7daf8558e4d2cdd72f0c725cf7982f80a6a29e43e56a766be5732d8dc8549145 not found: ID does not exist" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.907705 4680 scope.go:117] "RemoveContainer" containerID="d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c" Jan 26 18:32:20 crc kubenswrapper[4680]: E0126 18:32:20.908034 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c\": container with ID starting with d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c not found: ID does not exist" containerID="d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.908054 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c"} err="failed to get container status \"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c\": rpc error: code = NotFound desc = could not find container \"d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c\": container with ID starting with d78e4051b0055def81d874c672142a2b4faf30a70e9e305623507a4baeed501c not found: ID does not exist" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.908069 4680 scope.go:117] "RemoveContainer" containerID="1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87" Jan 26 18:32:20 crc kubenswrapper[4680]: E0126 18:32:20.908491 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87\": container with ID starting with 1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87 not found: ID does not exist" containerID="1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87" Jan 26 18:32:20 crc kubenswrapper[4680]: I0126 18:32:20.908527 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87"} err="failed to get container status \"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87\": rpc error: code = NotFound desc = could not find container \"1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87\": container with ID starting with 1efc4a06f82cfc91f4f4671e88c8db618b672099f8efece7b6fa2a02ba67dc87 not found: ID does not exist" Jan 26 18:32:21 crc kubenswrapper[4680]: I0126 18:32:21.182607 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d" path="/var/lib/kubelet/pods/e44dc3d3-b5a2-4598-9b37-bcd768ac1d4d/volumes"